Reset phase at Encora Apprenticeship — Week 4
This post is part of a weekly assignment at the Encora Apprenticeship. In this series, I’ll share my journey as a Software Engineer Apprentice. I hope these stories help and motivate others on their tech journey.
This week at Encora
This week was filled with science, Ignite Talks, Pretotyping, and some fancy topics. There is a lot to cover in this post, so let’s get started!
It’s about science
Quantum computing
In Programming the Universe, Seth Lloyd discusses information-processing revolutions and how each depends on the previous one. Tracing them backward leads to the most fundamental one, which supports his claim: the Big Bang itself was an information-processing revolution.
Lloyd also covers some basic concepts of quantum computing, like the definition of qubits and how they can represent 1 and 0 at the same time. They are able to do this because, in classical computing, bits represent orthogonal vectors such as

|0⟩ = [1, 0]ᵀ and |1⟩ = [0, 1]ᵀ

but in quantum computing, qubits represent superpositions

|ψ⟩ = α|0⟩ + β|1⟩

such that

|α|² + |β|² = 1

where |α|² and |β|² are the probabilities of measuring the qubit as 0 or 1, respectively. Lloyd also stresses the importance of isolating qubits while performing operations.
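The amplitude picture above can be sketched in a few lines of plain Python, using complex numbers for the amplitudes. This is only an illustration of the normalization condition, not real quantum simulation:

```python
import math

# A qubit state |ψ⟩ = α|0⟩ + β|1⟩ as a pair of complex amplitudes.
# Illustrative sketch: an equal superposition, where measuring 0 or 1
# is equally likely.
alpha = complex(1 / math.sqrt(2), 0)
beta = complex(1 / math.sqrt(2), 0)

p_zero = abs(alpha) ** 2  # probability of measuring 0
p_one = abs(beta) ** 2    # probability of measuring 1

# The normalization condition |α|² + |β|² = 1 must hold.
assert math.isclose(p_zero + p_one, 1.0)
```

Any pair of amplitudes works as long as the squared magnitudes sum to one; that constraint is exactly the “such that” equation above.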
It is rude to look at someone else’s quantum computer.
Staying on this topic, this week I also learned a little about quantum machine learning: the idea of mapping machine learning algorithms onto operations that can be performed with quantum computing concepts. The goal of using quantum computing for machine learning is to obtain exponentially faster algorithms.
Having your own black box
In Why you should have your own black box, Matthew Syed talks about the importance of having a culture that incentivizes learning, known as a growth mindset culture. This culture aims to keep learning continuously and to make incremental improvements through marginal gains that add up.
This culture is the counterpart of a high-blame culture, where errors are always covered up. The ideal is to uncover those errors and learn from them.
Wolfram Alpha
In Computing a theory of everything, Stephen Wolfram talks about the idea of computation and the vast complexity gap between the simplest universal cellular automaton and the simplest universal Turing machine.
Wolfram also mentions the idea of computing knowledge to produce answers to questions, which is the purpose of Wolfram Alpha, a knowledge system consisting (as of 2010) of about 8 million lines of Mathematica code. Wolfram Alpha uses a precise, formal language that calls on real-world data.
Finally, Wolfram raises the idea of describing the universe through computation and asks: can the universe be described by a simple set of rules in a program?
About Feynman
Richard Feynman (1918–1988) was a theoretical physicist with many contributions to science and a very interesting life. This week I got the opportunity to learn about him: his contributions to the Manhattan Project, where he pioneered pipelining (now a standard technique in parallel computing), his work in quantum electrodynamics, his invention of Feynman diagrams, and his development of the idea of the quantum computer, which doesn’t work like a Turing machine (classical computing).
There are many stories about Feynman, like his work with Danny Hillis, or how he played the bongos and wanted to travel to Tuva to learn the native style of throat singing.
Finally, I got to watch a lecture by Feynman about the scientific method, where he describes the flow of having an idea (the guess), computing the consequences of that idea, and comparing them against nature through experiment or experience.
Pretotyping manifesto
Pretotyping is the set of practices and processes for validating the market appeal and the actual usage of a potential new product. The main differences between pretotyping and prototyping are:
- Investment. Pretotyping takes an investment of hours or days, while prototypes can take days or weeks.
- Main question. Pretotypes answer whether a product would be used; prototypes, whether it can be built.
The advantage of pretotypes is that they’re quicker to put together, and they help us fail faster and jump to the next idea. Since we don’t spend too much time on each idea, it’s easier to admit failure when one doesn’t work.
Speaking of pretotyping, this week I got the opportunity to work on my own pretotypes, you can find the results here!
Testing and automation
Continuous integration at Google
John Micco gave an amazing talk about how tests are executed at Google (2012). The approach described relies on real-time information about builds, which helps identify failures fast, identify the culprit changes, and handle flaky tests.
Compared with traditional continuous integration, Google’s continuous build system triggers tests on every change instead of waiting for the next integration cycle, uses fine-grained dependencies instead of all dependencies, and makes it easier to identify which change broke which test.
Of course, this approach comes with a cost, since it requires an enormous amount of computing resources (even for Google), and this becomes a bigger problem as test execution times grow. A way to deal with this is to incentivize teams to optimize their use of shared test resources and to schedule tests intelligently.
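The fine-grained-dependencies idea can be sketched as a simple lookup: map each test to the files it depends on, then run only the tests affected by a change. This is a hypothetical toy model (the test names and files are mine), not Google’s actual system:

```python
# Hypothetical sketch of fine-grained test selection: given a map from each
# test to the source files it depends on, run only the tests affected by a
# change instead of the whole suite.
TEST_DEPS = {
    "test_login": {"auth.py", "session.py"},
    "test_search": {"search.py", "index.py"},
    "test_profile": {"auth.py", "profile.py"},
}

def affected_tests(changed_files, deps=TEST_DEPS):
    """Return the tests whose dependencies intersect the changed files."""
    changed = set(changed_files)
    return sorted(test for test, files in deps.items() if files & changed)

# A change to auth.py triggers only the two tests that depend on it.
print(affected_tests(["auth.py"]))  # → ['test_login', 'test_profile']
```

In a real build system the dependency map comes from the build graph itself, which is what makes triggering tests on every change affordable.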
Testing engineering @ Google and the release process for Google Chrome for iOS
In this awesome talk, I got to learn about two different topics. The first was test engineering, presented by Ivan Ho, who explains the differences between a software engineer in test and a test engineer.

Then, Lindsay Pasricha talks about the release process for Google products on iOS. Back in 2014, testing frameworks for mobile development weren’t as good as their desktop counterparts.
One of the main problems with iOS development was precisely that testing was very hard to do, and the number of users that could be tested against was very different from the number in production.
Testing in production
Gareth Bowles talks about the way Netflix tests in production. Because of the number of users on the platform at any given time, it is impossible to reproduce that kind of traffic in a testing environment. He also talks about everything that can go wrong when using cloud services: services going down in a given availability zone, an entire availability zone going down, the loss of service in an entire region, among other possible problems.
Bowles also describes how tests are done in production using the Simian Army, which consists of causing intentional failures, mainly focused on cloud infrastructure, to make sure the system is resilient enough to tolerate these faults.
Test coverage at Google
Andrei Chirila talks about how code review was done at Google (2014), and also about test coverage, an indicator of how much the tests exercise the code. There are different types of coverage, like function coverage (was this function called?), statement coverage (was this line of code executed?), branch coverage (was this edge in the program executed?), etc.
In this talk, Chirila also mentions a rule of thumb: aim for 85% or more test coverage, although this is not set in stone.
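Statement coverage is easy to demonstrate with a tiny tracer. The sketch below, built on Python’s standard `sys.settrace` hook (the helper and example function are mine, not from the talk), records which lines of a function actually run, showing why one test case alone can leave lines uncovered:

```python
import sys

def measure_line_coverage(func, *args):
    """Record which lines of `func` execute for the given input (illustrative)."""
    executed = set()
    code = func.__code__

    def tracer(frame, event, arg):
        # 'line' events arrive on the local trace function for func's frame.
        if event == "line" and frame.f_code is code:
            executed.add(frame.f_lineno - code.co_firstlineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def classify(x):
    if x > 0:
        return "positive"
    return "non-positive"

# Testing only x=5 never executes the "non-positive" line...
covered = measure_line_coverage(classify, 5)
# ...so a second test with x=-1 is needed to cover every statement.
covered |= measure_line_coverage(classify, -1)
```

Real tools like coverage.py do the same bookkeeping at scale, and branch coverage additionally checks that each `if` was taken both ways.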
Testing user experience
In this talk, Alex Eagle emphasizes the importance of having a test culture within the team. But testing is a hard problem, since it can get really expensive in terms of computing power.
He also talks about building testing tools that engineers actually use, to make sure tests are run and pass, and about making tests that give insights into the problem when they fail.
Eagle mentions the term “breakage”, which is the set of tests that fail because of one root cause. The lifecycle of a breakage is the following:
- Do we need a human to take action?
- Find the right assignee and route communication.
- Explain the problem and likely causes.
- Success metric: how quickly it was resolved.
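The lifecycle above can be sketched as a small in-memory record. Everything in this snippet (field names, the sample commit, the units) is my own illustration of the idea, not Eagle’s tooling:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a breakage: it groups every failing test that shares
# one root cause, may need a human, gets routed to an assignee, and tracks
# time-to-resolution as its success metric.
@dataclass
class Breakage:
    root_cause: str
    failing_tests: list = field(default_factory=list)
    needs_human: bool = True
    assignee: str = ""
    opened_at: float = 0.0
    resolved_at: float = 0.0

    def time_to_resolution(self):
        """Success metric: how quickly the breakage was resolved."""
        return self.resolved_at - self.opened_at

breakage = Breakage(
    root_cause="hypothetical commit abc123 broke the login flow",
    failing_tests=["test_login", "test_profile"],
    assignee="author of the change",
    opened_at=0.0,
)
breakage.resolved_at = 45.0  # resolved 45 minutes later (arbitrary units)
```

Grouping failures by root cause rather than by test is what keeps the communication routed to one assignee instead of paging everyone whose test went red.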
Chaos engineering
In this post, Tammy Butow talks about Chaos Engineering, which is defined as “the disciplined approach to identifying failures before they become outages”, which in practice means to break things on purpose to see what really happens.
This is very related to what we talked about in the “Testing in production” section, where Netflix purposefully causes errors in their infrastructure to see what really happens. Chaos engineering is very helpful for testing and understanding distributed and microservices architectures, and many big companies use it.
Kolton Andrus also talks about chaos engineering, comparing it to a vaccine: by injecting yourself with failure now, you become immune to it in the future.
About Continuous Development
For my Ignite Talk, I got to study Continuous Development, which is defined as an umbrella term that encapsulates many processes from the DevOps movement. These processes share many traits with agile development, such as being iterative, automated, and sharing the goal to deliver to the user as quickly as possible.
Some of the processes that are described are:
- Continuous integration, which is the practice of frequently merging the changes made by developers. This is done using a central repository and running unit tests to make sure no regressions are introduced into the code.
- Continuous delivery, which is the practice of always maintaining the code in a deployable state. This is done by running tests (integration, UI, stress) on the code and putting it into a non-production environment.
- Continuous deployment, which is the practice of frequently deploying small changes into production.
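The three practices above can be sketched as successive gates in a pipeline. This is a toy model with made-up function names, just to show how each stage builds on the previous one:

```python
def run_tests(change, suite):
    """Stand-in for a real test runner; always passes in this sketch."""
    return True

def continuous_integration(change):
    """Merge the change and run unit tests against the shared repository."""
    return run_tests(change, suite="unit")

def continuous_delivery(change):
    """Keep the code deployable: run heavier tests and stage the build."""
    return run_tests(change, suite="integration")

def continuous_deployment(change):
    """Push the validated change to production."""
    return f"deployed {change}"

def pipeline(change):
    # Each gate must pass before the next stage runs.
    if not continuous_integration(change):
        return "rejected at CI"
    if not continuous_delivery(change):
        return "rejected at delivery"
    return continuous_deployment(change)
```

In practice the difference between delivery and deployment is just who pushes the final button: a human in continuous delivery, the pipeline itself in continuous deployment.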

Recap
This week I got the opportunity to learn more about testing, and I also learned how testing is a core part of CI/CD pipelines, which helps with software quality and catching bugs before they reach production.
This is also the last week of assignments for the Reset Phase, and I’m very excited to see what the next phase will hold for us!