January 23, 2020




Linux, Jenkins, AWS, SRE, Prometheus, Docker, Python, Ansible, Git, Kubernetes, Terraform, OpenStack, SQL, NoSQL, Azure, GCP, DNS, Elastic, Network, Virtualization

repo name bregman-arie/devops-exercises
repo link https://github.com/bregman-arie/devops-exercises
language Python
size (curr.) 2271 kB
stars (curr.) 4802
created 2019-10-03
license Other

DevOps Questions & Exercises

:information_source:  This repo contains questions and exercises on various technical topics, sometimes related to DevOps and SRE :)

:bar_chart:  There are currently 918 questions

:warning:  You can use these for preparing for an interview but most of the questions and exercises don’t represent an actual interview. Please read Q&A for more details

:thought_balloon:  If you wonder “How to prepare for a DevOps interview?”, you might want to read some of my suggestions here

:pencil:  You can add more questions and exercises by submitting pull requests :) You can read more about it here

:books:  To learn more about DevOps and SRE, check the resources in devops-resources repository


:baby: Beginner


“DevOps is the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes. This speed enables organizations to better serve their customers and compete more effectively in the market.”


“DevOps is the union of people, process, and products to enable continuous delivery of value to our end users. The contraction of “Dev” and “Ops” refers to replacing siloed Development and Operations to create multidisciplinary teams that now work together with shared and efficient practices and tools. Essential DevOps practices include agile planning, continuous integration, continuous delivery, and monitoring of applications.”

Red Hat:

“DevOps describes approaches to speeding up the processes by which an idea (like a new software feature, a request for enhancement, or a bug fix) goes from development to deployment in a production environment where it can provide value to the user. These approaches require that development teams and operations teams communicate frequently and approach their work with empathy for their teammates. Scalability and flexible provisioning are also necessary. With DevOps, those that need power the most, get it—through self service and automation. Developers, usually coding in a standard development environment, work closely with IT operations to speed software builds, tests, and releases—without sacrificing reliability.”

You should mention some or all of the following:

  • Collaboration
  • Improved delivery
  • Security
  • Speed
  • Scale
  • Reliability

Make sure to elaborate :)

  • Not allowing pushes to production on Fridays :)
  • One specific person being in charge of certain tasks. For example, only one person is allowed to merge everyone else's code
  • Treating the production environment differently from the development environment. For example, not implementing security in the development environment

A development practice where developers integrate code into a shared repository frequently. The frequency can range from a couple of changes per week to, at larger scales, several changes an hour.

Each piece of code (change/patch) is verified to make sure it is safe to merge. Today, it's a common practice to test the change using an automated build that makes sure the code can be integrated. It can be one build which runs several levels of tests (unit, functional, etc.) or several separate builds, all or some of which have to pass in order for the change to be merged into the repository.

  • CI/CD
  • Provisioning infrastructure
  • Configuration Management
  • Monitoring & alerting
  • Logging
  • Code review
  • Code coverage
  • Tests
  • CI/CD - Jenkins, Circle CI, Travis
  • Provisioning infrastructure - Terraform, CloudFormation
  • Configuration Management - Ansible, Puppet, Chef
  • Monitoring & alerting - Prometheus, Nagios
  • Logging - Logstash, Graylog, Fluentd
  • Code review - Gerrit, Review Board
  • Code coverage - Cobertura, Clover, JaCoCo
  • Tests - Robot, Serenity, Gauge

In your answer you can mention one or more of the following:

  • mature vs. cutting edge
  • community size
  • architecture aspects - agent vs. agentless, master vs. masterless, etc.

In the mutable infrastructure paradigm, changes are applied on top of the existing infrastructure, and over time the infrastructure builds up a history of changes. Ansible, Puppet and Chef are examples of tools which follow the mutable infrastructure paradigm.

In the immutable infrastructure paradigm, every change is actually new infrastructure. A change to a server results in a new server instead of an update to the existing one. Terraform is an example of a technology which follows the immutable infrastructure paradigm.

  • Archive - collect all your app files into one archive (e.g. tar) and deliver it to the user.
  • Package - depends on the OS; you can use your OS package format (e.g. in RHEL/Fedora it's RPM) to deliver your software with a way to install, uninstall and update it using the standard packager commands
  • Images - Either VM or container images where your package is included with everything it needs in order to run successfully.

Stateless applications don't store any data on the host, which makes them ideal for horizontal scaling and microservices. Stateful applications depend on storage to save state and data; databases, for example, are typically stateful applications.

Styling, unit, functional, API, integration, smoke, scenario, …

You should be able to explain those that you mention.

It can be as simple as one Ansible (or other CM tool) task that runs periodically with Cron. In more advanced cases, perhaps a CI system.

Reliability, when used in DevOps context, is the ability of a system to recover from infrastructure failure or disruption. Part of it is also being able to scale based on your organization or team demands.

One can argue whether it's a per-company definition or a global one, but at least according to large companies like Google, the SRE team is responsible for the availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning of their services.

:star: Advanced

Configuration drift happens when, in an environment of servers with the exact same configuration and software, updates or configuration changes are applied to some servers but not others, so that over time these servers become slightly different from the rest.

This situation might lead to bugs which are hard to identify and reproduce.

Note: cross-dependency is when you have two or more changes to separate projects and you would like to test them in mutual build instead of testing each change separately.


:baby: Beginner

Jenkins is an open source automation tool written in Java with plugins built for Continuous Integration purpose. Jenkins is used to build and test your software projects continuously making it easier for developers to integrate changes to the project, and making it easier for users to obtain a fresh build. It also allows you to continuously deliver your software by integrating with a large number of testing and deployment technologies.

Jenkins integrates development life-cycle processes of all kinds, including build, document, test, package, stage, deploy, static analysis and much more.

  • Travis
  • Bamboo
  • Teamcity
  • CircleCI
  • Job
  • Build
  • Plugin
  • Slave
  • Executor

You can describe the UI way to add new slaves, but it's better to explain how to do it in a way that scales, like a script or a dynamic source of slaves such as one of the existing clouds.

:star: Advanced

  • Testing cross-dependencies (changes from multiple projects together)
  • Starting builds from any stage (although CloudBees implemented something called checkpoints)

Jenkins Dev

Jenkins Integration


:baby: Beginner

  • Pay as you go (or consumption-based payment) - you are paying only for what you are using. No upfront payments and payment stops when resources are no longer used.
  • Scalable - resources are scaled down or up based on demand


  • Public
  • Hybrid
  • Private

With cloud providers, someone else owns and manages the hardware, hires the relevant infrastructure teams and pays for real estate (for both hardware and people). You can focus on your business.

With an on-premise solution, it's quite the opposite. You need to take care of the hardware and infrastructure teams and pay for everything, which can be quite expensive. On the other hand, it's tailored to your needs.

The main idea behind serverless computing is that you don't need to manage the creation and configuration of servers. All you need to focus on is splitting your app into multiple functions which will be triggered by some actions.

It’s important to note that:

  • Serverless computing still uses servers. Saying there are no servers in serverless computing is simply wrong
  • Serverless computing allows for a different payment model: you pay only while your functions are running, not while a VM or container is running as in other payment models


:baby: Beginner

Global Infrastructure

  • Availability zone
  • Region
  • Edge location

AWS regions are data centers hosted across different geographical locations worldwide; each region is completely independent of the others.

Within each region, there are multiple isolated locations known as Availability Zones. Multiple availability zones ensure high availability in case one of them goes down.

Edge locations are basically a content delivery network which caches data and ensures lower latency and faster delivery to users in any location. They are located in major cities around the world.



A way for allowing a service of AWS to use another service of AWS. You assign roles to AWS resources.

Policies are documents used to give permissions as to what a user, group or role is able to do. Their format is JSON.



Stop the instance, change the instance type to one that matches the desired RAM, and start the instance again.




  • Origin
  • Edge location
  • Distribution


A transport solution which was designed for transferring large amounts of data (petabyte-scale) into and out of the AWS cloud.

Load Balancers
  • Application LB - layer 7 traffic
  • Network LB - ultra-high performance or static IP address
  • Classic LB - low costs, good for test or dev environments
AWS Security
  • AWS Inspector
  • AWS Artifact
  • AWS Shield


AWS Databases

  1. Multi AZ - great for disaster recovery
  2. Read Replicas - for better performance
  • You can confirm your suspicion by going to the AWS Redshift console and looking at the running queries graph. This should tell you if there are any long-running queries.
  • If confirmed, you can query for running queries and cancel the irrelevant queries
  • Check for connection leaks (query for running connections and include their IP)
  • Check for table locks and kill irrelevant locking sessions

Amazon ElastiCache is a fully managed Redis or Memcached in-memory data store.
It's great for use cases like two-tier web applications where the most frequently accessed data is stored in ElastiCache so response time is optimal.

A MySQL & PostgreSQL based relational database. Great for use cases like a two-tier web application that has a MySQL or PostgreSQL database layer and needs automated backups.

AWS Networking

Identify the service or tool




Cost Explorer

Trusted Advisor

AWS Snowball


Amazon Aurora

AWS Database Migration Service

AWS CloudTrail

AWS Misc

  • CloudTrail
  • CloudWatch
  • CloudSearch


:baby: Beginner

Ethernet simply refers to the most common type of Local Area Network (LAN) used today. A LAN—in contrast to a WAN (Wide Area Network), which spans a larger geographical area—is a connected network of computers in a small area, like your office, college campus, or even home.

A set of protocols that define how two or more devices can communicate with each other. To learn more about TCP/IP, read here

A MAC address is a unique identification number or code used to identify individual devices on the network.

Packets sent on the Ethernet always come from a MAC address and are sent to a MAC address. When a network adapter receives a packet, it compares the packet's destination MAC address to the adapter's own MAC address.

An Internet Protocol address (IP address) is a numerical label assigned to each device connected to a computer network that uses the Internet Protocol for communication. An IP address serves two main functions: host or network interface identification and location addressing.

A subnet mask is a 32-bit number that masks an IP address and divides it into a network address and a host address. A subnet mask is made by setting the network bits to all "1"s and the host bits to all "0"s. Within a given network, two host addresses are reserved for special purposes and cannot be assigned to hosts: the address with all host bits set to 0 is the network address, and the address with all host bits set to 1 is the broadcast address.

For Example

| Address Class | No of Network Bits | No of Host Bits | Subnet mask     | CIDR notation |
| ------------- | ------------------ | --------------- | --------------- | ------------- |
| A             | 8                  | 24              |       | /8            |
| A             | 9                  | 23              |     | /9            |
| A             | 12                 | 20              |     | /12           |
| A             | 14                 | 18              |     | /14           |
| B             | 16                 | 16              |     | /16           |
| B             | 17                 | 15              |   | /17           |
| B             | 20                 | 12              |   | /20           |
| B             | 22                 | 10              |   | /22           |
| C             | 24                 | 8               |     | /24           |
| C             | 25                 | 7               |   | /25           |
| C             | 28                 | 4               |   | /28           |
| C             | 30                 | 2               |   | /30           |

  • Application: user end (HTTP is here)
  • Presentation: establishes context between application-layer entities (Encryption is here)
  • Session: establishes, manages and terminates the connections
  • Transport: transfers variable-length data sequences from a source to a destination host (TCP & UDP are here)
  • Network: transfers datagrams from one network to another (IP is here)
  • Data link: provides a link between two directly connected nodes (MAC is here)
  • Physical: the electrical and physical specification of the data connection (Bits are here)

You can read more about the OSI model in penguintutor.com

  • Error correction

  • Packets routing

  • Cables and electrical signals

  • MAC address

  • IP address

  • Terminate connections

  • 3 way handshake

  • Error correction

  • Packets routing - Network

  • Cables and electrical signals - Physical

  • MAC address - Data link

  • IP address - Network

  • Terminate connections - Session

  • 3 way handshake - Transport

Unicast: One-to-one communication where there is one sender and one receiver.

Broadcast: Sending a message to everyone in the network. The address ff:ff:ff:ff:ff:ff is used for broadcasting. Two common protocols which use broadcast are ARP and DHCP.

Multicast: Sending a message to a group of subscribers. It can be one-to-many or many-to-many.

CSMA/CD stands for Carrier Sense Multiple Access / Collision Detection. Its primary focus is to manage access to a shared medium/bus where only one host can transmit at a given point of time.

CSMA/CD algorithm:

  1. Before sending a frame, a host checks whether another host is already transmitting a frame.
  2. If no one is transmitting, it starts transmitting the frame.
  3. If two hosts transmitted at the same time, we have a collision.
  4. Both hosts stop sending the frame and send everyone a 'jam signal', notifying everyone that a collision occurred
  5. They wait a random time before sending again
  6. Once each host has waited its random time, they try to send the frame again, and the cycle repeats if another collision occurs
  • router
  • switch
  • hub

A router is a physical or virtual appliance that passes information between two or more packet-switched computer networks. A router inspects a given data packet’s destination Internet Protocol address (IP address), calculates the best way for it to reach its destination and then forwards it accordingly.

Network Address Translation (NAT) is a process in which one or more local IP addresses are translated into one or more global IP addresses and vice versa, in order to provide Internet access to the local hosts.

A proxy server acts as a gateway between you and the internet. It’s an intermediary server separating end users from the websites they browse.

If you’re using a proxy server, internet traffic flows through the proxy server on its way to the address you requested. The request then comes back through that same proxy server (there are exceptions to this rule), and then the proxy server forwards the data received from the website to you.

Proxy servers provide varying levels of functionality, security, and privacy depending on your use case, needs, or company policy.

TCP 3-way handshake or three-way handshake is a process which is used in a TCP/IP network to make a connection between server and client.

A three-way handshake is primarily used to create a TCP socket connection. It works when:

  • A client node sends a SYN data packet over an IP network to a server on the same or an external network. The objective of this packet is to ask/infer if the server is open for new connections.
  • The target server must have open ports that can accept and initiate new connections. When the server receives the SYN packet from the client node, it responds and returns a confirmation receipt – the ACK packet or SYN/ACK packet.
  • The client node receives the SYN/ACK from the server and responds with an ACK packet.

TCP establishes a connection between the client and the server and guarantees the order of the packets. UDP, on the other hand, does not establish a connection and doesn't guarantee packet order. This makes UDP more lightweight than TCP and a perfect candidate for services like streaming.

Penguintutor.com provides a good explanation.

A default gateway serves as an access point or IP router that a networked computer uses to send information to a computer in another network or the internet.

ARP stands for Address Resolution Protocol. When you try to ping an IP address on your local network, your system has to turn that IP address into a MAC address. This involves using ARP to resolve the address, hence its name.

Systems keep an ARP look-up table where they store information about what IP addresses are associated with what MAC addresses. When trying to send a packet to an IP address, the system will first consult this table to see if it already knows the MAC address. If there is a value cached, ARP is not used.
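On Linux you can inspect this look-up table directly; a quick sketch:

```shell
# `ip neigh show` (or the older `arp -n`) prints the ARP/neighbour cache.
# The kernel also exposes the same table as a file:
cat /proc/net/arp
```

Each row maps an IP address to a HW (MAC) address on a given device.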

The exact meaning usually depends on the context, but overall the data plane refers to all the functions that forward packets and/or frames from one interface to another, while the control plane refers to all the functions that make use of routing protocols.

There is also “Management Plane” which refers to monitoring and management functions.

:star: Advanced



:baby: Beginner

An open question. Answer based on your real experience. You can highlight one or more of the following:

  • Troubleshooting & Debugging
  • Storage
  • Networking
  • Development
  • Deployments
  • ls

  • rm

  • rmdir (can you achieve the same result by using rm?)

  • grep

  • wc

  • curl

  • touch

  • man

  • nslookup or dig

  • df

  • ls - list files and directories. You can highlight common flags like -d, -a, -l, …

  • rm - remove files and directories. You should mention -r for recursive removal

  • rmdir - remove directories but you should mention it’s possible to use rm for that

  • grep - print lines that match patterns. Could be nice to mention -v, -r, -E flags

  • wc - print newline, word, and byte counts

  • curl - transfer a URL; or mention common usage like downloading files, API calls, …

  • touch - update timestamps but common usage is to create files

  • man - reference manuals

  • nslookup or dig - query nameservers

  • df - provides info regarding file system disk space usage

To fix it there are several options:

  1. Manually add what you need to your $PATH: PATH="$PATH:/usr/bin:/..etc"
  2. Restore your environment variables from a backup, if you have one.
  3. Look up your distro's default $PATH value and copy-paste it using method #1

Note: There are many ways of getting errors like this: if bash_profile or any configuration file of your interpreter was wrongly modified, causing erratic behaviour; permission issues; badly compiled software (if you compiled it yourself)… there is no answer that will be true 100% of the time.

You can use the commands cron and at. With cron, tasks are scheduled using the following format:

*/30 * * * * bash myscript.sh (executes the script every 30 minutes)

The tasks are stored in a cron file, you can write in it using crontab -e

Alternatively if you are using a distro with systemd it’s recommended to use systemd timers.

Normally you will schedule batch jobs.
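The systemd timer alternative mentioned above pairs a .timer unit with a .service unit. A minimal sketch (the unit names and the script path are hypothetical):

```
# /etc/systemd/system/myscript.timer
[Unit]
Description=Run myscript every 30 minutes

[Timer]
OnCalendar=*:0/30
Persistent=true

[Install]

# /etc/systemd/system/myscript.service
[Unit]
Description=myscript batch job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/myscript.sh
```

Activate it with systemctl enable --now myscript.timer.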


Using the chmod command.

  • 777
  • 644
  • 750
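A quick demonstration of what those numeric modes mean (the file name is arbitrary); each octal digit encodes owner/group/others, with r=4, w=2, x=1:

```shell
touch demo_file            # an arbitrary file for the demo
chmod 777 demo_file        # rwxrwxrwx - everyone can read/write/execute
chmod 644 demo_file        # rw-r--r-- - owner writes, everyone else only reads
chmod 750 demo_file        # rwxr-x--- - group can read/execute, others get nothing
ls -l demo_file            # shows the final rwxr-x--- permission string
rm demo_file
```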

  • No more disk space
  • No more inodes
  • No permissions

A daemon is a program that runs in the background without direct control of the user, although the user can at any time talk to the daemon.

systemd has many features such as user processes control/tracking, snapshot support, inhibitor locks..

If we visualize the unix/linux system in layers, systemd would fall directly after the linux kernel.

Hardware -> Kernel -> Daemons, System Libraries, Server Display.


Debugging (Beginner)

  • dstat -t is great for identifying network and disk issues
  • netstat -tnlaup can be used to see which processes are running on which ports
  • lsof -i -P can be used for the same purpose as netstat
  • ngrep -d any metafilter for matching regex against payloads of packets
  • tcpdump for capturing packets
  • wireshark - same concept as tcpdump but with a GUI (optional)

  • dstat -t is great for identifying network and disk issues
  • opensnoop can be used to see which files are being opened on the system (in real time)

strace is great for understanding what your program does. It prints every system call your program executed.

  • top will show you how much CPU percentage each process consumes
  • perf is a great choice for a sampling profiler and, in general, for figuring out what your CPU cycles are “wasted” on
  • flamegraphs are great for CPU consumption visualization (http://www.brendangregg.com/flamegraphs.html)

  • Check with top for anything unusual
  • Run dstat -t to check if it’s related to disk or network.
  • Check if it’s network related with sar
  • Check I/O stats with iostat


  • grep -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' some_file
  • grep -E 'error|failure' some_file
  • grep '[0-9]$' some_file
  1. An IP address
  2. The word “error” or “failure”
  3. Lines which end with a number
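The patterns can be tried against a throwaway file (its contents are invented for the demo). Note the -E flag, which enables the {1,3} interval syntax, and that dots are escaped so they match literal dots:

```shell
printf 'server is up\nerror: disk full\nretry 3\n' > sample.log

grep -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' sample.log  # the IP line
grep -E 'error|failure' sample.log                                    # the error line
grep '[0-9]$' sample.log                                              # lines ending in a digit
rm sample.log
```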

Another way to ask this: what happens from the moment you turned on the server until you get a prompt

An exit code (or return code) represents the code returned by a child process to its parent process.

0 is the exit code which represents success, while any non-zero value represents an error. Each number has a different meaning, based on how the application was developed.

I consider this as a good blog post to read more about it: https://shapeshed.com/unix-exit-codes
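A quick illustration in the shell, where $? holds the exit code of the last command:

```shell
true;  echo $?   # prints 0 - success
false; echo $?   # prints 1 - failure
```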

Storage & Filesystem (Beginner)

For each file (and directory) in Linux there is an inode, a data structure which stores meta data related to the file like its size, owner, permissions, etc.

  • Link count
  • File size
  • File name
  • File timestamp

Hard link is the same file, using the same inode. Soft link is a shortcut to another file, using a different inode.
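This is easy to see with ls -li, which prints inode numbers (file names here are arbitrary):

```shell
echo "hello" > original.txt
ln original.txt hard.txt        # hard link: shares the inode with original.txt
ln -s original.txt soft.txt     # soft link: its own inode, points at the *name*
ls -li original.txt hard.txt soft.txt   # original and hard show the same inode

rm original.txt
cat hard.txt                    # still prints "hello": data lives while a hard link exists
cat soft.txt 2>/dev/null || echo "dangling symlink"   # the name it points to is gone
rm hard.txt soft.txt
```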



  • PV
  • VG
  • LV
  • sed 's/1/2/g' /tmp/myFile
  • find . -iname "*.yaml" -exec sed -i 's/1/2/g' {} \;
  • /tmp
  • /var/log
  • /bin
  • /usr/local


You can achieve that by specifying & at the end of the command. As to why: some commands/processes can take a lot of time to finish execution, or run forever

SIGTERM - the default signal for terminating a process
SIGHUP - commonly used for reloading configuration
SIGKILL - a signal which cannot be caught or ignored

To view all available signals run kill -l
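A small demonstration (the sleep process is just a throwaway target):

```shell
kill -l | head -2          # list available signal names (first lines shown)

sleep 300 &                # a throwaway background process to signal
pid=$!
kill -SIGTERM "$pid"       # ask it politely to terminate (the default signal)
wait "$pid" 2>/dev/null || true
echo "process $pid terminated"
# kill -SIGKILL "$pid" would end it forcefully; SIGKILL cannot be caught or ignored
```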

A process which has finished running but has not been reaped, so its entry still exists in the process table.

One reason this happens is a parent process that is programmed incorrectly. Every parent process should call wait() to get the exit code of a child process which finished running. When the parent doesn't collect the child's exit code, the child remains a zombie even though it finished running.

You can’t kill a zombie process the regular way with kill -9 for example as it’s already dead.

One way to kill zombie process is by sending SIGCHLD to the parent process telling it to terminate its child processes. This might not work if the parent process wasn’t programmed properly. The invocation is kill -s SIGCHLD [parent_pid]

You can also try closing/terminating the parent process. This will make the zombie process a child of init (1) which does periodic cleanups and will at some point clean up the zombie process.
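A zombie can be produced on purpose for a couple of seconds (a throwaway sketch; relies on Linux procps ps): the inner shell forks a child and then execs into sleep, so nothing ever calls wait() for the child once it exits.

```shell
# The child sleeps briefly and exits; its parent has exec'd into `sleep`,
# which never wait()s, so the child lingers as a zombie until `sleep` ends.
sh -c 'sleep 0.2 & exec sleep 2' &
parent=$!
sleep 1
ps -o pid=,stat=,comm= --ppid "$parent"   # the STAT column shows Z (defunct)
wait "$parent"
```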

  • Processes executed/owned by a certain user
  • Process which are Java processes
  • Zombie Processes

If you mention the ps command with arguments at any point, be familiar with what these arguments do exactly.

find /some_dir -iname "*.yml" -print0 | xargs -0 -r sed -i 's/1/2/g'

You can use the commands top and free

The ls executable is built for an incompatible architecture.

You can use the split command this way: split -l 25 some_file
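A quick demo (the input file is generated on the spot): splitting 100 lines into pieces of 25 yields four files named xaa through xad.

```shell
seq 1 100 > big_file       # 100 numbered lines as sample input
split -l 25 big_file       # produces xaa, xab, xac, xad
wc -l xa?                  # each piece holds 25 lines
rm big_file xa?
```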

In Linux (and Unix) the first three file descriptors are:

  • 0 - the default data stream for input
  • 1 - the default data stream for output
  • 2 - the default data stream for output related to errors

This is a great article on the topic: https://www.computerhope.com/jargon/f/file-descriptor.htm
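These descriptors are what shell redirections operate on; a quick sketch (file names are arbitrary):

```shell
# 1> (or just >) redirects stdout, 2> redirects stderr
ls /etc/hosts /no/such/file >out.log 2>err.log || true
cat out.log     # the successful part went to fd 1
cat err.log     # the error message went to fd 2

# 2>&1 points fd 2 at wherever fd 1 currently points (merges the streams)
ls /no/such/file >all.log 2>&1 || true
rm out.log err.log all.log
```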

Linux Security

One of the following would work:

netstat -tnlp | grep <port_number>
lsof -i -n -P | grep <port_number>

Technically, yes.

  • SSH

  • HTTP

  • DNS


  • SSH - 22

  • HTTP - 80

  • DNS - 53

  • HTTPS - 443

Using nc is one way
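For example (the host name is a placeholder), and with a bash-only fallback for when nc isn't installed:

```shell
# nc -zv some-host 22     # -z: scan without sending data, -v: report the result
# Without nc, bash's /dev/tcp pseudo-device can do a basic TCP probe:
if timeout 3 bash -c 'echo > /dev/tcp/' 2>/dev/null; then
    echo "port 22 open"
else
    echo "port 22 closed or filtered"
fi
```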

One way would be ping6 ff02::1

Linux DNS

You can specify one or more of the following:

  • dig
  • nslookup
Applications and Services

Depends on the init system.

Systemd: systemctl enable [service_name]
System V: update-rc.d [service_name] and add the line id:5678:respawn:/bin/sh /path/to/app to /etc/inittab
Upstart: add an Upstart init script at /etc/init/service.conf

  1. SSH server is not installed
  2. SSH server is not running
  • adduser user_name --shell=/bin/false --no-create-home

Re-install the OS IS NOT the right answer :)

Random and perhaps useless :)

ls, wc, dd, df, du, ps, ip, cp, cd …

It’s used in commands to mark the end of commands options. One common example is when used with git to discard local changes: git checkout -- some_file


:star: Advanced

System Calls

fork() is used for creating a new process. It does so by cloning the calling process but the child process has its own PID and any memory locks, I/O operations and semaphores are not inherited.

wait() is used by a parent process to wait for the child process to finish execution. If wait is not used by a parent process then a child process might become a zombie process.

Executes a program. The program is passed as a filename (or path) and must be a binary executable or a script.

  • The shell reads the input using getline(), which reads the input file stream and stores it in a buffer as a string

  • The buffer is broken down into tokens and stored in an array this way: {"ls", "-l", "NULL"}

  • The shell checks if an expansion is required (in the case of ls *.c)

  • Once the program is in memory, its execution starts, first by calling readdir()


  • getline() originates in the GNU C library and is used to read lines from an input stream; it stores those lines in the buffer

Linux Filesystem & Files

There are a couple of ways to do that:

  • dd if=/dev/urandom of=new_file.txt bs=2MB count=1
  • truncate -s 2M new_file.txt
  • fallocate -l 2097152 new_file.txt
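Worth noting: truncate's 2M means 2×1024×1024 bytes, while dd's bs=2MB means 2×1000×1000. A quick check of the first variant:

```shell
truncate -s 2M new_file.txt
stat -c '%s' new_file.txt      # prints 2097152 (2 * 1024 * 1024 bytes)
rm new_file.txt
```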
open("/my/file") = 5
read(5, "file content")

These system calls are reading the file /my/file and 5 is the file descriptor number.

Linux Networking

Another common way to ask this question is “what part of the IP header does traceroute modify?”

This is a good article about the topic: https://ops.tips/blog/how-linux-creates-sockets


MemFree - the amount of unused physical RAM in your system
MemAvailable - the amount of memory available for new workloads (without pushing the system to use swap), based on MemFree, Active(file), Inactive(file), and SReclaimable
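Both values come straight from /proc/meminfo:

```shell
grep -E '^(MemFree|MemAvailable):' /proc/meminfo
# `free -m` presents the same information in a friendlier layout
```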

Operating System

:baby: Beginner

There are many ways to answer that. For those who look for simplicity, the book “Operating Systems: Three Easy Pieces” offers a nice version:

“responsible for making it easy to run programs (even allowing you to seemingly run many at the same time), allowing programs to share memory, enabling programs to interact with devices, and other fun stuff like that”


A process is a running program. A program is one or more instructions and the program (or process) is executed by the operating system.

It would support the following:

  • Create - allow to create new processes
  • Delete - allow to remove/destroy processes
  • State - allow to check the state of the process, whether it’s running, stopped, waiting, etc.
  • Stop - allow to stop a running process
  • The OS reads the program's code and any additional relevant data
  • The program's bytes are loaded into memory, or more specifically, into the address space of the process
  • Memory is allocated for the program's stack (aka run-time stack). The stack is also initialized by the OS with data like argv, argc and the parameters to main()
  • Memory is allocated for the program's heap, which is required for data structures like linked lists and hash tables
  • I/O initialization tasks are performed, like in Unix/Linux based systems where each process has 3 file descriptors (input, output and error)
  • The OS runs the program, starting from main()

Note: The loading of the program's code into memory is done lazily, which means the OS loads only the partial, relevant pieces required for the process to run and not the entire code.

False. It was true in the past but today’s operating systems perform lazy loading which means only the relevant pieces required for the process to run are loaded first.

  • Running - it’s executing instructions
  • Ready - it’s ready to run but for different reasons it’s on hold
  • Blocked - it’s waiting for some operation to complete. For example I/O disk request



Buffer: a reserved place in RAM which is used to hold data for temporary purposes
Cache: usually used when processes are reading from and writing to the disk, to make the process faster by making similar data used by different programs easily accessible


:baby: Beginner

Even on a system with one physical CPU, it's possible to allow multiple users to work on it and run programs. This is possible with time sharing, where computing resources are shared in a way that makes it seem to the user like the system has multiple CPUs, but in fact it's simply one CPU shared by applying multiprogramming and multi-tasking.

Somewhat the opposite of time sharing. While in time sharing a resource is used for a while by one entity and then the same resource can be used by another entity, in space sharing the space is shared by multiple entities but in a way where it’s not being transferred between them. It’s used by one entity until that entity decides to get rid of it. Take storage for example: a file is yours until you decide to delete it.


:baby: Beginner

  • Task
  • Module
  • Play
  • Playbook
  • Role

Task – a call to a specific Ansible module

Module – the actual unit of code executed by Ansible on your own host or a remote host. Modules are indexed by category (database, file, network, …) and also referred to as task plugins.

Play – One or more tasks executed on a given host(s)

Playbook – One or more plays. Each play can be executed on the same or different hosts

Role – Ansible roles allow you to group resources based on certain functionality/service so that they can be easily reused. In a role, you have directories for variables, defaults, files, templates, handlers, tasks, and metadata. You can then use the role by simply specifying it in your playbook.
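Such a role typically follows Ansible’s conventional directory layout (a sketch; `myrole` is a placeholder name):

```
roles/
  myrole/
    defaults/main.yml      # default variables (lowest precedence)
    vars/main.yml          # role variables
    files/                 # static files to copy
    templates/             # Jinja2 templates
    handlers/main.yml      # handlers, triggered by 'notify'
    tasks/main.yml         # main list of tasks
    meta/main.yml          # role metadata, e.g. dependencies
```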

Ansible is:

  • Agentless
  • Minimal run requirements (Python & SSH) and simple to use
  • Default mode is “push” (it supports also pull)
  • Focus on simplicity and ease-of-use

While it’s possible to provision resources with Ansible, it might not be the best choice for doing so, as Ansible doesn’t save state by default: a task that creates 5 instances will, when executed again, create 5 additional instances (unless an additional check is implemented).

An inventory file defines hosts and/or groups of hosts on which Ansible tasks are executed.

An example of inventory file:
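For instance, a minimal static inventory in INI format might look like this (host and group names are made up):

```ini
mail.example.com

[webservers]
web1.example.com
web2.example.com

[dbservers]
db1.example.com ansible_host=192.168.1.50 ansible_user=admin
```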


A dynamic inventory file tracks hosts from one or more sources like cloud providers and CMDB systems.

You should use one when using external sources and especially when the hosts in your environment are being automatically spun up and shut down, without you tracking every change in these sources.

  1. Ansible online docs
  2. ansible-doc -l for list of modules and ansible [module_name] for detailed information on a specific module
- name: Create a new directory
  file:
    path: "/tmp/new_directory"
    state: directory

- name: Print information about my host
  hosts: localhost
  gather_facts: 'no'
  tasks:
      - name: Print hostname
        debug:
          msg: "It's me, {{ ansible_hostname }}"

When given written code, always inspect it thoroughly. If your answer is “this will fail” then you are right. We are using a fact (ansible_hostname), which is a piece of information gathered from the host we are running on. But in this case we disabled fact gathering (gather_facts: no), so the variable is undefined, which results in failure.

- hosts: localhost
  tasks:
      - name: Install zlib
        package:
          name: zlib
          state: present
- hosts: all
  vars:
      mario_file: /tmp/mario
      package_list:
          - 'zlib'
          - 'vim'
  tasks:
      - name: Check for mario file
        stat:
            path: "{{ mario_file }}"
        register: mario_f

      - name: Install zlib and vim if mario file exists
        become: "yes"
        package:
            name: "{{ item }}"
            state: present
        with_items: "{{ package_list }}"
        when: mario_f.stat.exists

- name: Ensure all files exist
  assert:
    that:
      - item.stat.exists
  loop: "{{ files_list }}"

I'm <HOSTNAME> and my operating system is <OS>

Replace <HOSTNAME> and <OS> with the actual data for the specific host you are running on

The playbook to deploy the system_info file

- name: Deploy /tmp/system_info file
  hosts: all:!controllers
  tasks:
      - name: Deploy /tmp/system_info
        template:
            src: system_info.j2
            dest: /tmp/system_info

The content of the system_info.j2 template

# {{ ansible_managed }}
I'm {{ ansible_hostname }} and my operating system is {{ ansible_distribution }}

  • role defaults -> whoami: mario
  • extra vars (variables you pass to Ansible CLI with -e) -> whoami: toad
  • host facts -> whoami: luigi
  • inventory variables (doesn’t matter which type) -> whoami: bowser

According to variable precedence, which one will be used?

The right answer is ‘toad’.

Variable precedence is about how variables override each other when they are set in different locations. If you haven’t experienced it so far, I’m sure at some point you will, which makes it a useful topic to be aware of.

In the context of our question, the order will be extra vars (always override any other variable) -> host facts -> inventory variables -> role defaults (the weakest).

A full list can be found at the link above. Also, note there is a significant difference between Ansible 1.x and 2.x.

  • A module is a collection of tasks
  • It’s better to use shell or command instead of a specific module
  • Host facts override play variables
  • A role might include the following: vars, meta, and handlers
  • Dynamic inventory is generated by extracting information from external sources
  • It’s a best practice to use indention of 2 spaces instead of 4
  • ‘notify’ used to trigger handlers
  • This “hosts: all:!controllers” means ‘run only on controllers group hosts’
  • Conditionals
  • Loops

:star: Advanced

def cap(self, string):
    return string.capitalize()

Goku = 9001
Vegeta = 5200
Trunks = 6000
Gotenks = 32

With one task, switch the content to:

Goku = 9001
Vegeta = 250
Trunks = 40
Gotenks = 32
- name: Change saiyans levels
  lineinfile:
    dest: /tmp/exercise
    regexp: "{{ item.regexp }}"
    line: "{{ item.line }}"
  with_items:
    - { regexp: '^Vegeta', line: 'Vegeta = 250' }
    - { regexp: '^Trunks', line: 'Trunks = 40' }

Ansible Testing


:baby: Beginner

Read here

  • fully automated process of provisioning, modifying and deleting your infrastructure
  • version control for your infrastructure which allows you to quickly rollback to previous versions
  • validate infrastructure quality and stability with automated tests and code reviews
  • makes infrastructure tasks less repetitive

A common wrong answer is to say that Ansible and Puppet are configuration management tools and Terraform is a provisioning tool. While technically true, it doesn’t mean Ansible and Puppet can’t be used for provisioning infrastructure. Also, it doesn’t explain why Terraform should be used over CloudFormation if at all.

The benefits of Terraform over the other tools:

  • It follows the immutable infrastructure approach which has benefits like avoiding a configuration drift over time
  • Ansible and Puppet are more procedural (you specify what to execute in each step) while Terraform is declarative: you describe the overall desired state rather than each resource or task. Take going from 1 to 2 servers as an example: in Terraform you specify 2, while in Ansible and Puppet you describe provisioning 1 additional server, so you have to explicitly make sure only one more server is provisioned.
  • Provider
  • Resource
  • Provisioner

It keeps track of the IDs of created resources so that Terraform knows what it is managing.

  • terraform init
  • terraform plan
  • terraform validate
  • terraform apply

terraform init scans your code to figure out which providers you are using and downloads them.
terraform plan lets you see what Terraform is about to do before actually doing it.
terraform validate checks whether the configuration is syntactically valid and internally consistent within a directory.
terraform apply provisions the resources specified in the .tf files.

You use it this way: variable “my_var” {}

It’s a resource which was successfully created but failed during provisioning. Terraform will fail and mark this resource as “tainted”.

  • string
  • number
  • bool
  • list(<TYPE>)
  • set(<TYPE>)
  • map(<TYPE>)
  • object({<ATTR_NAME> = <TYPE>, ... })
  • tuple([<TYPE>, ...])
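For instance, typed input variables might be declared like this (names and defaults are illustrative):

```hcl
variable "instance_count" {
  type    = number
  default = 2
}

variable "tags" {
  type = map(string)
  default = {
    env = "dev"
  }
}

variable "subnet" {
  type = object({
    cidr   = string
    public = bool
  })
}
```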

There are quite a few cases you might need to use them:

  • you want to reference resources not managed through terraform
  • you want to reference resources managed by a different terraform module
  • you want to cleanly compute a value with typechecking, such as with aws_iam_policy_document

:star: Advanced


:baby: beginner

The primary difference between containers and VMs is that containers allow you to virtualize multiple workloads on a single operating system, while in the case of VMs the hardware is virtualized to run multiple machines, each with its own OS.

  • Containers don’t require an entire guest operating system as VMs
  • It usually takes a few seconds to set up a container, as opposed to VMs which can take minutes, or at least more time than containers, since there is an entire OS to boot and initialize, as opposed to containers where you mainly launch the app itself
  • Docker is one of the technologies allowing you to manage containers - run multiple containers on a host, move containers between hosts, etc.

You should choose VMs when:

  • you need to run an application which requires all the resources and functionalities of an OS
  • you need full isolation and security

You should choose containers when:

  • you need a lightweight solution
  • Running multiple versions or instances of a single application

  1. Docker CLI passes your request to the Docker daemon
  2. Docker daemon downloads the image from Docker Hub
  3. Docker daemon creates a new container using the image it downloaded
  4. Docker daemon redirects output from the container to the Docker CLI, which redirects it to the standard output

docker run

Create a new image from a container’s changes

  • docker run
  • docker rm
  • docker ps
  • docker pull
  • docker build
  • docker commit
  1. To remove one or more Docker images use the docker image rm command followed by the IDs of the images you want to remove.
  2. The docker system prune command will remove all stopped containers, all dangling images, and all unused networks
  3. docker rm $(docker ps -a -q) - This command will delete all stopped containers. The command docker ps -a -q will return all existing container IDs and pass them to the rm command which will delete them. Any running containers will not be deleted.

Docker can build images automatically by reading the instructions from a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.

COPY takes in a src and destination. It only lets you copy in a local file or directory from your host (the machine building the Docker image) into the Docker image itself. ADD lets you do that too, but it also supports 2 other sources. First, you can use a URL instead of a local file / directory. Secondly, you can extract a tar file from the source directly into the destination. Although ADD and COPY are functionally similar, generally speaking, COPY is preferred. That’s because it’s more transparent than ADD. COPY only supports the basic copying of local files into the container, while ADD has some features (like local-only tar extraction and remote URL support) that are not immediately obvious.

RUN lets you execute commands inside of your Docker image. These commands get executed once at build time and get written into your Docker image as a new layer. CMD is the command the container executes by default when you launch the built image. A Dockerfile can only have one CMD. You could say that CMD is a Docker run-time operation, meaning it’s not something that gets executed at build time. It happens when you run an image. A running image is called a container.
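A small Dockerfile sketch tying COPY, RUN and CMD together (file names and base image are illustrative):

```dockerfile
FROM python:3.9-slim

# COPY: executed at build time; copies a local file into the image as a new layer
COPY requirements.txt /app/requirements.txt

# RUN: executed once at build time and written into the image as a new layer
RUN pip install -r /app/requirements.txt

COPY app.py /app/app.py

# CMD: the default command the container executes at run time; only one per Dockerfile
CMD ["python", "/app/app.py"]
```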

A common answer to this is to use hadolint project which is a linter based on Dockerfile best practices.

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.

For example, you can use it to set up ELK stack where the services are: elasticsearch, logstash and kibana. Each running in its own container.

  • Define the services you would like to run together in a docker-compose.yml file
  • Run docker-compose up to run the services
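For the ELK example above, a docker-compose.yml sketch might look like this (image tags and ports are illustrative):

```yaml
version: '3'
services:
  elasticsearch:
    image: elasticsearch:7.9.1
    ports:
      - "9200:9200"
  logstash:
    image: logstash:7.9.1
    depends_on:
      - elasticsearch
  kibana:
    image: kibana:7.9.1
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
```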

Docker Hub is a native Docker registry service which allows you to run pull and push commands to install and deploy Docker images from the Docker Hub.

Docker Cloud is built on top of the Docker Hub so Docker Cloud provides you with more options/features compared to Docker Hub. One example is Swarm management which means you can create new swarms in Docker Cloud.

A Docker image is built up from a series of layers. Each layer represents an instruction in the image’s Dockerfile. Each layer except the very last one is read-only. Each layer is only a set of differences from the layer before it. The layers are stacked on top of each other.

When you create a new container, you add a new writable layer on top of the underlying layers. This layer is often called the “container layer”. All changes made to the running container, such as writing new files, modifying existing files, and deleting files, are written to this thin writable container layer.

The major difference between a container and an image is the top writable layer. All writes to the container that add new or modify existing data are stored in this writable layer. When the container is deleted, the writable layer is also deleted. The underlying image remains unchanged. Because each container has its own writable container layer, and all changes are stored in this container layer, multiple containers can share access to the same underlying image and yet have their own data state.

:star: Advanced


:baby: Beginner

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

To understand what Kubernetes is good for, let’s look at some examples:

  • You would like to run a certain application in a container on multiple different locations. Sure, if it’s 2-3 servers/locations you can do it by yourself, but it can be challenging to scale it up to many additional locations.

  • Performing updates and changes across hundreds of containers

  • Handle cases where the current load requires scaling up (or down)

A cluster consists of a Master (which coordinates the cluster) and Nodes where the applications are running.

The master coordinates all the workflows in the cluster:

  • Scheduling applications
  • Managing desired state
  • Rolling out new updates

A node is a virtual machine or a physical server that serves as a worker for running the applications. It’s recommended to have at least 3 nodes in Kubernetes production environment.

Kubelet is an agent running on each node and responsible for node communication with the master.

Minikube is a lightweight Kubernetes implementation. It creates a local virtual machine and deploys a simple (single node) cluster.

Start by inspecting the pods’ status. We can use the command kubectl get pods (--all-namespaces for pods in the system namespace)

If we see “Error” status, we can keep debugging by running the command kubectl describe pod [name]. In case we still don’t see anything useful we can try stern for log tailing.

In case we find out there was a temporary issue with the pod or the system, we can try restarting the pod with the following kubectl scale deployment [name] --replicas=0

Setting the replicas to 0 will shut down the process. Now start it with kubectl scale deployment [name] --replicas=1

Kubernetes Commands

  • Check the cluster status: kubectl cluster-info
  • Check the status of the nodes: kubectl get nodes


:baby: Beginner

An expression is anything that results in a value (even if the value is None). Basically, any sequence of literals and operators, so you can say that a string, integer, list, … are all expressions.

Statements are instructions executed by the interpreter like variable assignments, for loops and conditionals (if-else).
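The distinction can be demonstrated with eval() (which accepts only expressions) and exec() (which accepts statements), a minimal sketch:

```python
# An expression evaluates to a value; a statement performs an action.
value = eval("1 + 2 * 3")   # expression: evaluates to 7
print(value)

# An assignment is a statement: eval("x = 5") would raise a SyntaxError.
namespace = {}
exec("x = 5", namespace)    # statement: binds x in the given namespace
print(namespace["x"])       # 5
```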

SOLID design principles are about:

  • Make it easier to extend the functionality of the system
  • Make the code more readable and easier to maintain


  • Single Responsibility - A class should only have a single responsibility
  • Open-Closed - An entity should be open for extension, but closed for modification. What this practically means is that you should extend functionality by adding a new code and not by modifying it. Your system should be separated into components so it can be easily extended without breaking everything.
  • Liskov Substitution - Any derived class should be able to substitute its parent without altering its correctness. Practically, every part of the code will get the expected result no matter which part is using it
  • Interface Segregation - A client should never depend on anything it doesn’t use
  • Dependency Inversion - High level modules should depend on abstractions, not low level modules
Common algorithms
  • How does it work?
  • Can you implement it? (in any language you prefer)
  • What is the average performance of the algorithm you wrote?

It’s a search algorithm used with sorted arrays/lists to find a target value by dividing the array each iteration and comparing the middle value to the target value. If the middle value is smaller than the target value, the target value is searched for in the right part of the divided array, else in the left side. This continues until the value is found (or the array has been divided the maximum number of times)

python implementation
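A minimal sketch of such an implementation (the function name is illustrative):

```python
def binary_search(arr, target):
    """Return the index of target in the sorted list arr, or -1 if absent."""
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2   # middle of the current window
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1         # search the right half
        else:
            high = mid - 1        # search the left half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -1
```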

The average performance of the above algorithm is O(log n). Best performance can be O(1) and worst O(log n).

Code Review
  • The commit message is not important. When reviewing a change/patch one should focus on the actual change
  • You shouldn’t test your code before submitting it. This is what CI/CD exists for.



Time Complexity

  • Stack
  • Queue
  • Linked List
  • Binary Search Tree
  • Quick sort
  • Merge sort
  • Bucket Sort
  • Radix Sort

Data Structures & Types

:star: Advanced

def find_triplets_sum_to_zero(li):
    li = sorted(li)
    for i, val in enumerate(li):
        low, up = 0, len(li)-1
        while low < i < up:
            tmp = val + li[low] + li[up]
            if tmp > 0:
                up -= 1
            elif tmp < 0:
                low += 1
            else:
                yield li[low], val, li[up]
                low += 1
                up -= 1


:baby: Beginner

1. It is a high level general purpose programming language created in 1991 by Guido van Rossum.
2. The language is interpreted, with CPython (written in C) being the most used/maintained implementation.
3. It is strongly typed. The typing discipline is duck typing and gradual.
4. Python focuses on readability and makes use of whitespace/indentation instead of brackets { }
5. The Python package manager is called PIP ("pip installs packages"), having more than 200,000 available packages.
6. Python comes with pip installed and a big standard library that offers the programmer many precooked solutions.
7. In Python **everything** is an object.

There are many other characteristics but these are the main ones that every python programmer should know.

Numbers (int, float, ...)

Mutability determines whether you can modify an object of specific type.

The mutable data types are:

  • List
  • Dictionary
  • Set

The immutable data types are:

  • Numbers (int, float, ...)
  • String
  • Tuple
  • Frozenset

You can usually use the function hash() to check an object mutability. If an object is hashable, it is immutable (although this does not always work as intended as user defined objects might be mutable and hashable).
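A quick sketch of that heuristic:

```python
# Hashable built-ins are typically immutable; mutable containers raise TypeError.
for obj in (42, "text", (1, 2), frozenset({1})):
    hash(obj)   # works: these types are immutable

for obj in ([1], {1: 2}, {1}):
    try:
        hash(obj)
    except TypeError:
        print(type(obj).__name__, "is unhashable (mutable)")
```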

In general, first class objects in programming languages are objects which can be assigned to a variable, used as a return value, and used as arguments or parameters. In Python you can treat functions this way. Let’s say we have the following function

def my_function():
    return 5

You can then assign a function to a variable like this: x = my_function, or return a function as a return value like this: return my_function
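Functions can also be passed as arguments or stored in data structures; a short sketch building on the function above (call_twice is an illustrative name):

```python
def my_function():
    return 5

# Assigned to a variable (no parentheses: we pass the object, not a call)
x = my_function
print(x())                        # 5

# Passed as an argument to another function
def call_twice(f):
    return f() + f()

print(call_twice(my_function))    # 10

# Stored in data structures
funcs = [my_function, x]
print([f() for f in funcs])       # [5, 5]
```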

It evaluates to True. The reason is that the two created empty lists are different objects. x is y only evaluates to True when x and y are the same object.

char = input("Insert a character: ")
if char == "a" or char == "y" or char == "o" or char == "e" or char == "u" or char == "i":
    print("It's a vowel!")

A shorter version:

if input("Insert a character: ")[0].lower() in "aeiou":  # Takes care of multiple characters and small/capital cases
    print("It's a vowel!")

Or, for readability:

char = input("Insert a character: ")
if char[0].lower() in "aeiou":  # Takes care of multiple characters and separate cases
    print("It's a vowel!")

By definition inheritance is the mechanism where an object acts as a base of another object, retaining all its properties and behaviors.

So if Class B inherits from Class A, every characteristics from class A will be also available in class B.
Class A would be the 'Base class' and B class would be the 'derived class'.

This comes handy when you have several classes that share the same functionalities.

The basic syntax is:

class Base: pass

class Derived(Base): pass

A more forged example:

class Animal:
    def __init__(self):
        print("and I'm alive!")

    def eat(self, food):
        print("ñom ñom ñom", food)

class Human(Animal):
    def __init__(self, name):
        print('My name is ', name)
        super().__init__()

    def write_poem(self):
        print('Foo bar bar foo foo bar!')

class Dog(Animal):
    def __init__(self, name):
        print('My name is', name)
        super().__init__()

    def bark(self):
        print('woof woof')

michael = Human('Michael')
michael.eat('Spam')
michael.write_poem()

bruno = Dog('Bruno')
bruno.eat('bone')
bruno.bark()

>>> My name is  Michael
>>> and I'm alive!
>>> ñom ñom ñom Spam
>>> Foo bar bar foo foo bar!
>>> My name is Bruno
>>> and I'm alive!
>>> ñom ñom ñom bone
>>> woof woof

Calling super() calls the Base method, thus, calling super().__init__() we called the Animal __init__.

There is a more advanced python feature called MetaClasses that aid the programmer to directly control class creation.

In the following block of code x is a class attribute while self.y is an instance attribute

class MyClass(object):
    x = 1

    def __init__(self, y):
        self.y = y

#  Note that you generally don't need to know the compiling process but knowing where everything comes from
#  and giving complete answers shows that you truly know what you are talking about.

Generally, every compiling process has two steps.
    - Analysis
    - Code Generation.
    Analysis can be broken into:
        1. Lexical analysis   (Tokenizes source code)
        2. Syntactic analysis (Check whether the tokens are legal or not, tldr, if syntax is correct)
               for i in 'foo'
             SyntaxError: invalid syntax
        We missed ':'
        3. Semantic analysis  (Contextual analysis, legal syntax can still trigger errors, did you try to divide by 0,
          hash a mutable object or use an undeclared function?)
                ZeroDivisionError: division by zero
    These three analysis steps are responsible for error handling.
    The second step would be responsible for errors, mostly syntax errors, the most common error.
    The third step would be responsible for Exceptions.
    As we have seen, Exceptions are semantic errors, there are many builtin Exceptions:
    You can also have user defined Exceptions that have to inherit from the `Exception` class, directly or indirectly.

    Basic example:
    class DividedBy2Error(Exception):
        def __init__(self, message):
            self.message = message
    def division(dividend,divisor):
        if divisor == 2:
            raise DividedBy2Error('I dont want you to divide by 2!')
        return dividend / divisor
    division(100, 2)
    >>> __main__.DividedBy2Error: I dont want you to divide by 2!

Exceptions: Errors detected during execution are called Exceptions.

Handling Exceptions: When an error occurs, or an exception as we call it, Python will normally stop and generate an error message. Exceptions can be handled using the try and except statements in Python.

Example: The following example asks the user for input until a valid integer has been entered. If the user enters a non-integer value it will raise an exception, and using except it will catch that exception and ask the user to enter a valid integer again.

while True:
    try:
        a = int(input("please enter an integer value: "))
        break
    except ValueError:
        print("Oops! Please enter a valid integer value.")

For more details about errors and exceptions follow this https://docs.python.org/3/tutorial/errors.html

  1. Translation lookup in i18n
  2. Hold the result of the last executed expression or statement in the interactive interpreter.
  3. As a general purpose “throwaway” variable name. For example: x, y, _ = get_data() (x and y are used but since we don’t care about third variable, we “threw it away”).

A lambda expression is an ‘anonymous’ function; the difference from a normal function defined using the keyword `def` is the syntax and usage.

The syntax is:

lambda [parameters]: [expression]


  • A lambda function that adds 10 to any argument passed.
x = lambda a: a + 10
  • An addition function
addition = lambda x, y: x + y
print(addition(10, 20))
  • Squaring function
square = lambda x : x ** 2

Generally it is considered bad practice under PEP 8 to assign a lambda expression to a name; lambdas are meant to be used inline, as parameters and inside other expressions.


  • getter
  • setter
  • deleter
x, y = y, x

  • dict

First ask the user for the amount of numbers that will be used. Use a while loop that runs until amount_of_numbers becomes 0, subtracting one from amount_of_numbers each loop. In the while loop, ask the user for a number, which is added to a variable each time the loop runs.

def return_sum():
	amount_of_numbers = int(input("How many numbers? "))
	total_sum = 0
	while amount_of_numbers != 0:
		num = int(input("Input a number. "))
		total_sum += num
		amount_of_numbers -= 1
	return total_sum

li = [2, 5, 6]

Python Lists

Maximum: max(some_list)
Minimum: min(some_list)
Last item: some_list[-1]

sorted(some_list, reverse=True)[:3]



sorted_li = sorted(li, key=len)

Or without creating a new list:

li.sort(key=len)
  • sorted(list) will return a new list (original list doesn’t change)

  • list.sort() will return None but the list is changed in-place

  • sorted() works on any iterable (Dictionaries, Strings, …)

  • list.sort() is faster than sorted(list) in case of Lists
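The points above can be verified directly:

```python
li = [3, 1, 2]

new_li = sorted(li)      # returns a new list; the original is unchanged
print(new_li)            # [1, 2, 3]
print(li)                # [3, 1, 2]

result = li.sort()       # sorts in place; returns None
print(result)            # None
print(li)                # [1, 2, 3]

# sorted() works on any iterable
print(sorted("bca"))               # ['a', 'b', 'c']
print(sorted({"b": 1, "a": 2}))    # ['a', 'b'] (iterates over the keys)
```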

nested_li = [['1', '2', '3'], ['4', '5', '6']]
[[int(x) for x in li] for li in nested_li]

sorted(li1 + li2) 

Another way:

i, j = 0, 0
merged_li = []

while i < len(li1) and j < len(li2):
    if li1[i] < li2[j]:
        merged_li.append(li1[i])
        i += 1
    else:
        merged_li.append(li2[j])
        j += 1

merged_li = merged_li + li1[i:] + li2[j:]

There are many ways of solving this problem. # Note: :list and -> bool are just Python type annotations; they are not needed for the correct execution of the algorithm.

Taking advantage of sets and len:

def is_unique(l:list) -> bool:
    return len(set(l)) == len(l)

This one can be seen used in other programming languages.

def is_unique2(l:list) -> bool:
    seen = []

    for i in l:
        if i in seen:
            return False
        seen.append(i)
    return True

Here we just count and make sure every element is just repeated once.

def is_unique3(l:list) -> bool:
    for i in l:
        if l.count(i) > 1:
            return False
    return True

This one might look more convoluted but hey, one liners.

def is_unique4(l:list) -> bool:
    return all(map(lambda x: l.count(x) < 2, l))
def my_func(li = []):
    li.append("hmm")
    print(li)

If we call it 3 times, what would be the result each call?

['hmm']
['hmm', 'hmm']
['hmm', 'hmm', 'hmm']

Method 1

for i in reversed(li):
    print(i)

Method 2

n = len(li) - 1
while n >= 0:
    print(li[n])
    n -= 1

li = [[1, 4], [2, 1], [3, 9], [4, 2], [4, 5]]

sorted(li, key=lambda l: l[1])


li.sort(key=lambda l: l[1])

nums = [1, 2, 3]
letters = ['x', 'y', 'z']

list(zip(nums, letters))


{k: v for k, v in sorted(x.items(), key=lambda item: item[1])}



Common Algorithms Implementation


Python Files

with open('file.txt', 'w') as file:
    file.write("My insightful comment")

Python OS

Python Regex

Using the re module

brothers_menu =  \
[{'name': 'Mario', 'food': ['mushrooms', 'goombas']}, {'name': 'Luigi', 'food': ['mushrooms', 'turtles']}]

# "Classic" Way
def get_food(brothers_menu) -> set:
    temp = []

    for brother in brothers_menu:
        for food in brother['food']:
            temp.append(food)

    return set(temp)

# One liner way (Using list comprehension)
set([food for bro in brothers_menu for food in bro['food']])

Python Strings

x = "itssssssameeeemarioooooo"
y = ''.join(set(x))

def permute_string(string):

    if len(string) == 1:
        return [string]

    permutations = []
    for i in range(len(string)):
        swaps = permute_string(string[:i] + string[(i+1):])
        for swap in swaps:
            permutations.append(string[i] + swap)

    return permutations


Short way (but probably not acceptable in interviews):

from itertools import permutations

[''.join(p) for p in permutations("abc")]

Detailed answer can be found here: http://codingshell.com/python-all-string-permutations

>> ', '.join(["One", "Two", "Three"])
>> " ".join("welladsadgadoneadsadga".split("adsadga")[:2])
>> "".join(["c", "t", "o", "a", "o", "q", "l"])[0::2]
>>> 'One, Two, Three'
>>> 'well done'
>>> 'cool'

The correct way is:

string[::-1]
A more visual way (careful: this is very slow):

def reverse_string(string):
    temp = ""
    for char in string:
        temp =  char + temp
    return temp


  • Static method
  • Class method
  • instance method

One way is:

the_list.sort(key=lambda x: x[1])

  • zip()
  • map()
  • filter()


pdb :D

Short answer is: It returns a None object.

We could go a bit deeper and explain the difference between

def a ():
    return

>>> None


def a ():
    pass

>>> None

Or we could be asked this as a following question, since they both give the same result.

We could use the dis module to see what’s going on:

  2           0 LOAD_CONST               0 (<code object a at 0x0000029C4D3C2DB0, file "<dis>", line 2>)
              2 LOAD_CONST               1 ('a')
              4 MAKE_FUNCTION            0
              6 STORE_NAME               0 (a)

  5           8 LOAD_CONST               2 (<code object b at 0x0000029C4D3C2ED0, file "<dis>", line 5>)
             10 LOAD_CONST               3 ('b')
             12 MAKE_FUNCTION            0
             14 STORE_NAME               1 (b)
             16 LOAD_CONST               4 (None)
             18 RETURN_VALUE

Disassembly of <code object a at 0x0000029C4D3C2DB0, file "<dis>", line 2>:
  3           0 LOAD_CONST               0 (None)
              2 RETURN_VALUE

Disassembly of <code object b at 0x0000029C4D3C2ED0, file "<dis>", line 5>:
  6           0 LOAD_CONST               0 (None)
              2 RETURN_VALUE

An empty return is exactly the same as return None, and functions without any explicit return will always return None regardless of the operations; therefore

def sum(a, b):
    global c
    c = a + b
>>> None

li = []
for i in range(1, 10):
    li.append(i)

Can be rewritten as:

li = [i for i in range(1, 10)]

def is_int(num):
    if isinstance(num, int):
        print('Yes')
    else:
        print('No')

What would be the result of is_int(2) and is_int(False)?

Data Structures & Types
Python Testing

PEP8 is a list of coding conventions and style guidelines for Python

5 style guidelines:

1. Limit all lines to a maximum of 79 characters.
2. Surround top-level function and class definitions with two blank lines.
3. Use a trailing comma when making a tuple of one element
4. Use spaces (and not tabs) for indentation
5. Use 4 spaces per indentation level



[(1,), (2,), (3,)]

list(zip(range(5), range(50), range(50)))
list(zip(range(5), range(50), range(-2)))

[(0, 0, 0), (1, 1, 1), (2, 2, 2), (3, 3, 3), (4, 4, 4)]
[]


def add(num1, num2):
    return num1 + num2
def sub(num1, num2):
    return num1 - num2

def mul(num1, num2):
    return num1*num2

def div(num1, num2):
    return num1 / num2

operators = {
    '+': add,
    '-': sub,
    '*': mul,
    '/': div
}

if __name__ == '__main__':
    operator = str(input("Operator: "))
    num1 = int(input("1st number: "))
    num2 = int(input("2nd number: "))
    print(operators[operator](num1, num2))

:star: Advanced

This is a good reference https://docs.python.org/3/library/datatypes.html

def wee(word):
    return word

def oh(f):
    return f + "Ohh"
>>> oh(wee("Wee"))
<<< WeeOhh

This allows us to control what happens before the execution of a given function, and if we added another wrapper (a function receiving a function that itself receives a function as a parameter), we could also control what happens after the execution.

Sometimes we want to control the before/after execution of many functions, and it would get tedious to write

f = function(function_1())
f = function(function_1(function_2(*args)))

every time. That's what decorators do: they introduce syntax to write all of this on the go, using the '@' keyword.

Simple decorator:

def deco(f):
    print(f"Hi I am the {f.__name__}() function!")
    return f

@deco
def hello_world():
    return "Hi, I'm in!"

a = hello_world()
print(a)

>>> Hi I am the hello_world() function!
    Hi, I'm in!

This is the simplest decorator version; it basically saves us from writing hello_world = deco(hello_world). But at this point we can only control what happens before the execution, so let's take on the after:

def deco(f):
    def wrapper(*args, **kwargs):
        print("Rick Sanchez!")
        func = f(*args, **kwargs)
        print("I'm in!")
        return func
    return wrapper

@deco
def f(word):
    print(word)

a = f("************")
>>> Rick Sanchez!
    ************
    I'm in!

deco receives a function -> f
wrapper receives the arguments -> *args, **kwargs
wrapper calls the decorated function with its arguments -> f(*args, **kwargs)
deco returns wrapper

As you can see, we can conveniently do things before and after the execution of any given function.

For example, we could write a decorator that calculates the execution time of a function.

import time

def deco(f):
    def wrapper(*args, **kwargs):
        before = time.time()
        func = f(*args, **kwargs)
        after = time.time()
        print(after - before)
        return func
    return wrapper

@deco
def f():
    time.sleep(2)

a = f()
>>> 2.0008859634399414

Or create a decorator that executes a function n times.

def n_times(n):
    def wrapper(f):
        def inner(*args, **kwargs):
            for _ in range(n):
                func = f(*args, **kwargs)
            return func
        return inner
    return wrapper

@n_times(3)
def f():
    print("Hello!")

a = f()
>>> Hello!
    Hello!
    Hello!



:baby: Beginner

This approach requires a human to always check why the value was exceeded and how to handle it, while today it is more effective to notify people only when they need to take an actual action. If the issue doesn't require any human intervention, the problem can instead be fixed automatically by processes running in the relevant environment.

  • Alerts
  • Tickets
  • Logging

Python Geeks :)


:baby: Beginner

  • Prometheus server
  • Push Gateway
  • Alert Manager

The Prometheus server is responsible for scraping and storing the data. The Push Gateway is used for short-lived jobs. The Alert Manager is responsible for alerts ;)
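To illustrate what the Prometheus server actually scrapes, here is a minimal sketch (the metric name and value below are made up) of the text exposition format that applications expose on their /metrics endpoint:

```python
# Hypothetical metric value; the string below follows the Prometheus
# text exposition format that the server scrapes from /metrics endpoints.
requests_total = 1027

metrics = (
    "# HELP app_requests_total Total HTTP requests handled.\n"
    "# TYPE app_requests_total counter\n"
    f"app_requests_total {requests_total}\n"
)
print(metrics)
```

In practice you would use a client library (e.g. prometheus_client for Python) rather than building this string by hand.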

:star: Advanced


:baby: Beginner

Shortly, git pull = git fetch + git merge

When you run git pull, it gets all the changes from the remote (central) repository and merges them into your corresponding branch in your local repository.

git fetch gets all the changes from the remote repository and stores them in a separate branch in your local repository, without merging them into your working branch

The Git directory is where Git stores the meta data and object database for your project. This is the most important part of Git, and it is what is copied when you clone a repository from another computer.

The working directory is a single checkout of one version of the project. These files are pulled out of the compressed database in the Git directory and placed on disk for you to use or modify.

The staging area is a simple file, generally contained in your Git directory, that stores information about what will go into your next commit. It’s sometimes referred to as the index, but it’s becoming standard to refer to it as the staging area.

This answer taken from git-scm.com

git revert creates a new commit which undoes the changes from last commit.

git reset depends on the usage: it can modify the index or change the commit that the branch head is currently pointing at.

Using the git rebase command

Mentioning two or three should be enough and it’s probably good to mention that ‘recursive’ is the default one.

  • recursive
  • resolve
  • ours
  • theirs

This page explains it the best: https://git-scm.com/docs/merge-strategies

git diff

git checkout HEAD~1 -- /path/of/the/file

This info copied from https://stackoverflow.com/questions/29217859/what-is-the-git-folder

  • Not waiting too long between commits
  • Not removing the .git directory :)

You delete a remote branch with this syntax:

git push origin :[branch_name]

or, more explicitly:

git push origin --delete [branch_name]

gitattributes allow you to define attributes per pathname or path pattern.

You can use it for example to control endlines in files. In Windows and Unix based systems, you have different characters for new lines (\r\n and \n accordingly). So using gitattributes we can align it for both Windows and Unix with * text=auto in .gitattributes for anyone working with git. This way, if you use the Git project in Windows you'll get \r\n and if you are using Unix or Linux, you'll get \n.
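For example, a hypothetical .gitattributes enforcing the line-ending behaviour described above might look like:

```
# Normalize line endings for all files Git detects as text
* text=auto
# Always use LF for shell scripts, CRLF for Windows batch files
*.sh text eol=lf
*.bat text eol=crlf
# Never touch binary files
*.png binary
```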

git checkout -- <file_name>

git reset HEAD~1 for removing the last commit. If you would like to also discard the changes, use git reset --hard HEAD~1

False. If you would like to keep a file on your filesystem, use git reset <file_name>

:star: Advanced

Probably good to mention that it’s:

  • It’s good for cases of merging more than one branch (and is also the default for such use cases)
  • It’s primarily meant for bundling topic branches together

This is a great article about Octopus merge: http://www.freblogg.com/2016/12/git-octopus-merge.html


:baby: Beginner

  • Strong and static typing - the type of the variables can’t be changed over time and they have to be defined at compile time
  • Simplicity
  • Fast compile times
  • Built-in concurrency
  • Garbage collected
  • Platform independent
  • Compile to standalone binary - anything you need to run your app will be compiled into one binary. Very useful for version management in run-time.

Go also has a good community.

The result is the same, a variable with the value 2.

With var x int = 2 we are setting the variable type to integer while with x := 2 we are letting Go figure out by itself the type.

False. We can’t redeclare variables, but yes, we must use declared variables.

This should be answered based on your usage but some examples are:

  • fmt - formatted I/O
package main

func main() {
	var x float32 = 13.5
	var y int
	y = x
}

This fails to compile: Go doesn’t convert types implicitly, so assigning a float32 to an int requires an explicit conversion such as y = int(x).

package main

import "fmt"

func main() {
	var x int = 101
	var y string
	y = string(x)
	fmt.Println(y)
}

It looks up which unicode value is set at 101 and uses it for converting the integer to a string (101 is the character ‘e’). If you want to get “101” you should use the package “strconv” and replace y = string(x) with y = strconv.Itoa(x)

package main

func main() {
	var x = 2
	var y = 3
	const someConst = x + y
}

This fails to compile. Constants in Go can only be declared using constant expressions, but x, y and their sum are variables, so the const initializer x + y is not a constant.

package main

import "fmt"

const (
	x = iota
	y = iota
)

const z = iota

func main() {
	fmt.Printf("%v\n", x)
	fmt.Printf("%v\n", y)
	fmt.Printf("%v\n", z)
}
Go’s iota identifier is used in const declarations to simplify definitions of incrementing numbers. Because it can be used in expressions, it provides a generality beyond that of simple enumerations. x and y in the first iota group, z in the second. Iota page in Go Wiki

It avoids having to declare all the variables for the return values. It is called the blank identifier. answer in SO

package main

import "fmt"

const (
	_ = iota + 3
	x
)

func main() {
	fmt.Printf("%v\n", x)
}

Since the first constant in the group is declared as iota + 3 (0 + 3 = 3), the next one, x, repeats the expression with iota incremented, giving the value 4

package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var wg sync.WaitGroup
	wg.Add(2)
	go func() {
		time.Sleep(time.Second * 2)
		fmt.Println("1")
		wg.Done()
	}()
	go func() {
		fmt.Println("2")
		wg.Done()
	}()
	wg.Wait()
	fmt.Println("3")
}

Output: 2 1 3

Article about sync/waitgroup

Golang package sync

package main

import "fmt"

func mod1(a []int) {
	for i := range a {
		a[i] = 5
	}
	fmt.Println("1:", a)
}

func mod2(a []int) {
	a = append(a, 125) // !
	for i := range a {
		a[i] = 5
	}
	fmt.Println("2:", a)
}

func main() {
	s1 := []int{1, 2, 3, 4}
	mod1(s1)
	fmt.Println("1:", s1)

	s2 := []int{1, 2, 3, 4}
	mod2(s2)
	fmt.Println("2:", s2)
}

Output: 1: [5 5 5 5] 1: [5 5 5 5] 2: [5 5 5 5 5] 2: [1 2 3 4]

In mod1, a shares the same underlying array as s1, so when we write through a[i] we’re changing s1’s values too. But in mod2, append creates a new slice (the backing array must grow), so we’re changing only a, not s2.

Article about arrays, Blog post about append

package main

import (
	"container/heap"
	"fmt"
)
// An IntHeap is a min-heap of ints.
type IntHeap []int

func (h IntHeap) Len() int           { return len(h) }
func (h IntHeap) Less(i, j int) bool { return h[i] < h[j] }
func (h IntHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }

func (h *IntHeap) Push(x interface{}) {
	// Push and Pop use pointer receivers because they modify the slice's length,
	// not just its contents.
	*h = append(*h, x.(int))
}

func (h *IntHeap) Pop() interface{} {
	old := *h
	n := len(old)
	x := old[n-1]
	*h = old[0 : n-1]
	return x
}

func main() {
	h := &IntHeap{4, 8, 3, 6}
	heap.Init(h)
	heap.Push(h, 7)
	fmt.Println((*h)[0])
}

Output: 3

Golang container/heap package


:baby: Beginner

MongoDB advantages are as followings:

  • Schemaless
  • Easy to scale-out
  • No complex joins
  • Structure of a single object is clear

The main difference is that SQL databases are structured (data is stored in the form of tables with rows and columns - like an excel spreadsheet table) while NoSQL is unstructured, and the data storage can vary depending on how the NoSQL DB is set up, such as key-value pair, document-oriented, etc.

  • Heterogeneous data which changes often
  • Data consistency and integrity is not top priority
  • Best if the database needs to scale rapidly


:baby: Beginner

Shell Scripting

:baby: Beginner

#!/bin/bash is the shebang line. It tells the system which interpreter to use for running the script.

/bin/bash is the most common shell used as the default login shell on Linux systems. The shell’s name is an acronym for Bourne-Again SHell. Bash can execute the vast majority of scripts and is widely used because it has more features and is better developed than many alternative shells.

Few example:

  • Comments on how to run it and/or what it does
  • Adding “set -e” since I want the script to exit if a certain command failed

You can have an entirely different answer. It’s based only on your experience.

Depends on the language and settings used. When a script written in Bash fails to run a certain command, it will keep running and execute all the commands that come after the one which failed. Most of the time we would actually want the opposite to happen. To make Bash exit when a specific command fails, use ‘set -e’ in your script.

  • Speed
  • The module we need doesn’t exist
  • We are delivering the scripts to customers who don’t have access to the public network and don’t necessarily have Ansible installed on their systems.
  • echo $0 - the name of the script
  • echo $? - the exit status of the last command
  • echo $$ - the PID of the current shell/script
  • echo $@ - all the arguments passed to the script
  • echo $# - the number of arguments passed to the script

Answer depends on the language you are using for writing your scripts. If Bash is used for example then:

  • Adding -x to the script I’m running in Bash
  • Old good way of adding echo statements

If Python, then using pdb is very useful.

Using the keyword read so for example read x will wait for user input and will store it in the variable x.




ping -c 3 "$SERVERIP" > /dev/null 2>&1
if [ $? -ne 0 ]
then
   # Use mailer here:
   mailx -s "Server $SERVERIP is down" -t "$NOTIFYEMAIL" < /dev/null
fi


#!/bin/bash
for x in *
do
    if [ -s "$x" ]
    then
        rm -rf "$x"
    fi
done


:(){ :|:& };:

This is a fork bomb: a function named : that pipes a call to itself into another backgrounded call to itself, spawning processes exponentially until the system runs out of resources.

A short way of using if/else. An example:

[[ $a = 1 ]] && b="yes, equal" || b="nope"

diff <(ls /tmp) <(ls /var/tmp)


:baby: Beginner

Structured Query Language

The main difference is that SQL databases are structured (data is stored in the form of tables with rows and columns - like an excel spreadsheet table) while NoSQL is unstructured, and the data storage can vary depending on how the NoSQL DB is set up, such as key-value pair, document-oriented, etc.

ACID stands for Atomicity, Consistency, Isolation, Durability. In order to be ACID compliant, the database must meet each of the four criteria

Atomicity - When a change occurs to the database, it should either succeed or fail as a whole.

For example, if you were to update a table, the update should completely execute. If it only partially executes, the update is considered failed as a whole, and will not go through - the DB will revert back to its original state before the update occurred. It should also be mentioned that Atomicity ensures that each transaction is completed as its own standalone “unit” - if any part fails, the whole statement fails.

Consistency - any change made to the database should bring it from one valid state into the next.

For example, if you make a change to the DB, it shouldn’t corrupt it. Consistency is upheld by checks and constraints that are pre-defined in the DB. For example, if you tried to change a value from a string to an int when the column should be of datatype string, a consistent DB would not allow this transaction to go through, and the action would not be executed

Isolation - this ensures that a database will never be seen “mid-update” - as multiple transactions are running at the same time, it should still leave the DB in the same state as if the transactions were being run sequentially.

For example, let’s say that 20 other people were making changes to the database at the same time. At the time you executed your query, 15 of the 20 changes had gone through, but 5 were still in progress. You should only see the 15 changes that had completed - you wouldn’t see the database mid-update as the change goes through.

Durability - Once a change is committed, it will remain committed regardless of what happens (power failure, system crash, etc.). This means that all completed transactions must be recorded in non-volatile memory.

Note that SQL is by nature ACID compliant. Certain NoSQL DB’s can be ACID compliant depending on how they operate, but as a general rule of thumb, NoSQL DB’s are not considered ACID compliant
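Atomicity in particular is easy to demonstrate with Python’s built-in sqlite3 module: if anything inside a transaction fails, the whole change is rolled back. A toy sketch (the table and values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100)")
conn.commit()

try:
    # The connection context manager commits on success, rolls back on error
    with conn:
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'")
        raise RuntimeError("simulated crash mid-transaction")
except RuntimeError:
    pass

# The partial update never became visible: the balance is still 100
print(conn.execute("SELECT balance FROM accounts WHERE name = 'alice'").fetchone()[0])  # 100
```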

SQL - Best used when data integrity is crucial. SQL is typically implemented with many businesses and areas within the finance field due to its ACID compliance.

NoSQL - Great if you need to scale things quickly. NoSQL was designed with web applications in mind, so it works great if you need to quickly spread the same information around to multiple servers

Additionally, since NoSQL does not adhere to the strict table with columns and rows structure that Relational Databases require, you can store different data types together.

A Cartesian product is when all rows from the first table are joined to all rows in the second table. This can be done implicitly by not defining a key to join, or explicitly by calling a CROSS JOIN on two tables, such as below:

Select * from customers CROSS JOIN orders;

Note that a Cartesian product can also be a bad thing - when performing a join on two tables in which both do not have unique keys, this could cause the returned information to be incorrect.
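The row-multiplication effect is easy to demonstrate with Python’s sqlite3 module (the tiny tables below are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER)")
conn.execute("CREATE TABLE orders (id INTEGER)")
conn.executemany("INSERT INTO customers VALUES (?)", [(1,), (2,), (3,)])
conn.executemany("INSERT INTO orders VALUES (?)", [(10,), (20,)])

# Every customer row paired with every order row: 3 x 2 = 6 rows
rows = conn.execute("SELECT * FROM customers CROSS JOIN orders").fetchall()
print(len(rows))  # 6
```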

SQL Specific Questions

For these questions, we will be using the Customers and Orders tables shown below:


| Customer_ID | Customer_Name | Items_in_cart | Cash_spent_to_Date |
|-------------|---------------|---------------|--------------------|
| 100204      | John Smith    | 0             | 20.00              |
| 100205      | Jane Smith    | 3             | 40.00              |
| 100206      | Bobby Frank   | 1             | 100.20             |


| Customer_ID | Order_ID | Item                        | Price | Date_sold  |
|-------------|----------|-----------------------------|-------|------------|
| 100206      | A123     | Rubber Ducky                | 2.20  | 2019-09-18 |
| 100206      | A123     | Bubble Bath                 | 8.00  | 2019-09-18 |
| 100206      | Q987     | 80-Pack TP                  | 90.00 | 2019-09-20 |
| 100205      | Z001     | Cat Food - Tuna Fish        | 10.00 | 2019-08-05 |
| 100205      | Z001     | Cat Food - Chicken          | 10.00 | 2019-08-05 |
| 100205      | Z001     | Cat Food - Beef             | 10.00 | 2019-08-05 |
| 100205      | Z001     | Cat Food - Kitty quesadilla | 10.00 | 2019-08-05 |
| 100204      | X202     | Coffee                      | 20.00 | 2019-04-29 |

Select * From Customers;

Select Items_in_cart From Customers Where Customer_Name = 'John Smith';

Select SUM(Cash_spent_to_Date) as SUM_CASH From Customers;

Select count(1) as Number_of_People_w_items From Customers where Items_in_cart > 0;

You would join them on the unique key. In this case, the unique key is Customer_ID in both the Customers table and Orders table

Select c.Customer_Name, o.Item From Customers c Left Join Orders o On c.Customer_ID = o.Customer_ID;


with cat_food as (
    Select Customer_ID, SUM(Price) as TOTAL_PRICE
    From Orders
    Where Item like '%Cat Food%'
    Group by Customer_ID
)
Select Customer_name, TOTAL_PRICE
From Customers c
Inner Join cat_food f
    On c.Customer_ID = f.Customer_ID
Where c.Customer_ID in (Select Customer_ID from cat_food);

Although this was a simple statement, the “with” clause really shines when a complex query needs to be run on a table before joining to another. With statements are nice, because you create a pseudo temp table when running your query, instead of creating a whole new table.

The sum of all the cat food purchases wasn’t readily available, so we used a with statement to create a pseudo table holding the sum of the prices paid by each customer, then joined that table normally.


:baby: Beginner

An availability set is a logical grouping of VMs that allows Azure to understand how your application is built to provide redundancy and availability. It is recommended that two or more VMs are created within an availability set to provide for a highly available application and to meet the 99.95% Azure SLA.




It’s a monitoring service that provides threat protection across all of the services in Azure. More specifically, it:

  • Provides security recommendations based on your usage
  • Continuously monitors the security settings of all the services
  • Analyzes and identifies potential inbound attacks
  • Detects and blocks malware using machine learning

Azure AD is a cloud-based identity service. You can use it as a standalone service or integrate it with an existing Active Directory service you are already running.


:baby: Beginner


:baby: Beginner

  • Nova
  • Neutron
  • Cinder
  • Glance
  • Keystone

:star: Advanced



:baby: Beginner

Authentication is the process of identifying whether a service or a person is who they claim to be. Authorization is the process of identifying what level of access the service or the person has (after authentication is done)

SSO (Single Sign-on), is a method of access control that enables a user to log in once and gain access to the resources of multiple software systems without being prompted to log in again.

Multi-Factor Authentication (two-factor authentication, or 2FA, being the most common form) requires the user to present two or more pieces of evidence (credentials) when logging into an account.

  • The credentials fall into any of these three categories: something you know (like a password or PIN), something you have (like a smart card), or something you are (like your fingerprint). Credentials must come from two different categories to enhance security.

Access control based on user roles (i.e., a collection of access authorizations a user receives based on an explicit or implicit assumption of a given role). Role permissions may be inherited through a role hierarchy and typically reflect the permissions needed to perform defined functions within an organization. A given role may apply to a single individual or to several individuals.

  • RBAC mapped to job function, assumes that a person will take on different roles, overtime, within an organization and different responsibilities in relation to IT systems.

Symmetric encryption is any technique where the same key is used to both encrypt and decrypt the data.

Asymmetric encryption is any technique where two different keys are used for encryption and decryption; these keys are known as the public key and the private key.
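A toy illustration of the symmetric idea (an XOR cipher is NOT secure; it only shows that one shared key both encrypts and decrypts):

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same function encrypts and decrypts
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret_key = b"shared-key"  # known to both parties
ciphertext = xor_cipher(b"top secret", secret_key)
plaintext = xor_cipher(ciphertext, secret_key)  # same key reverses it
print(plaintext)  # b'top secret'
```

Real symmetric ciphers (e.g. AES) follow the same one-key principle but are cryptographically secure.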

  • Vulnerability
  • Exploits
  • Risk
  • Threat

Cross-Site Scripting (XSS) is a type of attack in which the attacker inserts browser-executable code within an HTTP response. The injected attack is not stored in the web application; it only affects users who open a maliciously crafted link or third-party web page. A successful attack allows the attacker to access any cookies, session tokens, or other sensitive information retained by the browser and used with that site

You can test by detecting user-defined variables and how to input them. This includes hidden or non-obvious inputs such as HTTP parameters, POST data, hidden form field values, and predefined radio or selection values. You then analyze each found vector to see if there are potential vulnerabilities; when one is found, you craft input data for each input vector and test whether the crafted input works.

SQL injection is an attack that consists of inserting either a partial or full SQL query through data input from the browser to the web application. A successful SQL injection allows the attacker to read sensitive information stored in the database of the web application.

You can test by using a stored procedure: the application must sanitize the user input to get rid of the risk of code injection. If it doesn’t, the user could enter bad SQL that will then be executed within the procedure

DNS spoofing occurs when a particular DNS server’s records are “spoofed” or altered maliciously to redirect traffic to the attacker. This redirection of traffic allows the attacker to spread malware, steal data, etc.


  • Use encrypted data transfer protocols - Using end-to-end encryption via SSL/TLS will help decrease the chance that a website / its visitors are compromised by DNS spoofing.
  • Use DNSSEC - DNSSEC, or Domain Name System Security Extensions, uses digitally signed DNS records to help determine data authenticity.
  • Implement DNS spoofing detection mechanisms - it’s important to implement DNS spoofing detection software. Products such as XArp help protect against ARP cache poisoning by inspecting the data that comes through before transmitting it.

Stuxnet is a computer worm that was originally aimed at Iran’s nuclear facilities and has since mutated and spread to other industrial and energy-producing facilities. The original Stuxnet malware attack targeted the programmable logic controllers (PLCs) used to automate machine processes. It generated a flurry of media attention after it was discovered in 2010 because it was the first known virus to be capable of crippling hardware and because it appeared to have been created by the U.S. National Security Agency, the CIA, and Israeli intelligence.

Spectre is an attack method which allows a hacker to “read over the shoulder” of a program it does not have access to. Using code, the hacker forces the program to pull up its encryption key allowing full access to the program

Cross-Site Request Forgery (CSRF) is an attack that makes the end user initiate an unwanted action on a web application in which the user has an authenticated session. The attacker may use an email to force the end user to click on a link that then executes malicious actions. When a CSRF attack is successful, it compromises the end user’s data

You can use OWASP ZAP to analyze a request and check whether there is protection against cross-site request forgery (e.g. when the Security Level is set to 0, the value of csrf-token is SecurityIsDisabled). One can then use data from such a request to prepare a CSRF attack with OWASP ZAP

HTTP Header Injection vulnerabilities occur when user input is insecurely included within server response headers. If an attacker can inject newline characters into the header, then they can inject new HTTP headers and also, by injecting an empty line, break out of the headers into the message body and write arbitrary content into the application’s response.

A buffer overflow (or buffer overrun) occurs when the volume of data exceeds the storage capacity of the memory buffer. As a result, the program attempting to write the data to the buffer overwrites adjacent memory locations.


:star: Advanced

MAC address flooding attack (CAM table flooding attack) is a type of network attack where an attacker connected to a switch port floods the switch interface with a very large number of Ethernet frames with different fake source MAC addresses.

CPDoS or Cache Poisoned Denial of Service. It poisons the CDN cache. By manipulating certain header requests, the attacker forces the origin server to return a Bad Request error which is stored in the CDN’s cache. Thus, every request that comes after the attack will get an error page.


:baby: Beginner

  • Module
  • Manifest
  • Node

:star: Advanced


:baby: Beginner

The Elastic Stack consists of:

  • Elasticsearch
  • Kibana
  • Logstash
  • Beats
  • Elastic Hadoop
  • APM Server

The most used projects are Elasticsearch, Logstash and Kibana, also known as the ELK stack.

The process may vary based on the chosen architecture:

  1. The data logged by the application is picked up by Filebeat and sent to Logstash
  2. Logstash processes the log based on the defined filters. Once done, the output is sent to Elasticsearch
  3. Elasticsearch stores the document it received, and the document is indexed for quick future access
  4. The user creates visualizations in Kibana based on the indexed data
  5. The user creates a dashboard composed of the visualizations created in the previous step

From the official docs:

“Elasticsearch is a distributed document store. Instead of storing information as rows of columnar data, Elasticsearch stores complex data structures that have been serialized as JSON documents”

Index in Elastic is in most cases compared to a whole database from the SQL/NoSQL world. You can choose to have one index to hold all the data of your app or have multiple indices where each index holds different type of your app (e.g. index for each service your app is running).

The official docs also offer a great explanation (in general, it’s really good documentation, as every project should have):

“An index can be thought of as an optimized collection of documents and each document is a collection of fields, which are the key-value pairs that contain your data”

From the official docs:

“An inverted index lists every unique word that appears in any document and identifies all of the documents each word occurs in.”

Continuing with the comparison to SQL/NoSQL, a Document in Elastic is a row in a table in the case of SQL, or a document in a collection in the case of NoSQL. As in NoSQL, a Document is a JSON object which holds data on a unit in your app. What this unit is depends on your app: if your app is related to books, each document describes a book; if your app is about shirts, each document is a shirt.

Red means some data is unavailable. Yellow can be caused by running a single-node cluster instead of a multi-node one.

False. From the official docs:

“Each indexed field has a dedicated, optimized data structure. For example, text fields are stored in inverted indices, and numeric and geo fields are stored in BKD trees.”

  • _index
  • _id
  • _type
  • You can optimize fields for partial matching
  • You can define custom formats of known fields (e.g. date)
  • You can perform language-specific analysis

An index is split into shards and documents are hashed to a particular shard. Each shard may be on a different node in a cluster and each one of the shards is a self contained index. This allows Elasticsearch to scale to an entire cluster of servers.
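A simplified sketch of that routing idea (real Elasticsearch uses its own routing formula; the function below is illustrative only):

```python
import hashlib

def shard_for(doc_id: str, num_shards: int) -> int:
    # Hash the document ID and map it onto one of the shards
    digest = hashlib.md5(doc_id.encode()).hexdigest()
    return int(digest, 16) % num_shards

# The same ID always routes to the same shard
assert shard_for("customer-42", 5) == shard_for("customer-42", 5)
print(shard_for("customer-42", 5))
```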

In a network/cloud environment where failures can be expected any time, it is very useful and highly recommended to have a failover mechanism in case a shard/node somehow goes offline or disappears for whatever reason. To this end, Elasticsearch allows you to make one or more copies of your index’s shards into what are called replica shards, or replicas for short.

Term Frequency is how often a term appears in a given document and Document Frequency is how often a term appears in all documents. They both are used for determining the relevance of a term by calculating Term Frequency / Document Frequency.
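A minimal sketch of computing both quantities over a toy corpus (the documents and term below are made up):

```python
from collections import Counter

docs = [
    "the quick brown fox",
    "the lazy dog",
    "the quick dog",
]
term = "quick"

tf = Counter(docs[0].split())[term]             # term frequency in doc 0
df = sum(1 for d in docs if term in d.split())  # document frequency in corpus
print(tf, df, tf / df)  # 1 2 0.5
```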

“The index is actively being written to”. More about the phases here

It creates the customer index if it doesn’t exist and adds a new document with the field name, which is set to “John Dow”. Also, if it’s the first document it will get the ID 1.

  1. If name value was different then it would update “name” to the new value
  2. In any case, it bumps version field by one

The Bulk API is used when you need to index multiple documents. For a high number of documents it is significantly faster than individual requests, since it requires far fewer network roundtrips.
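A sketch of how a Bulk API request body is built: it is newline-delimited JSON, where each action line is followed by its document (the index name and documents here are hypothetical):

```python
import json

docs = [{"name": "John Doe"}, {"name": "Jane Doe"}]

lines = []
for i, doc in enumerate(docs, start=1):
    # Action line describing what to do, then the document itself
    lines.append(json.dumps({"index": {"_index": "customer", "_id": str(i)}}))
    lines.append(json.dumps(doc))

# The Bulk API requires the body to end with a newline
bulk_body = "\n".join(lines) + "\n"
print(bulk_body)
```

This body would be sent to the _bulk endpoint with the Content-Type application/x-ndjson.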

Query DSL

From the official docs:

“In the query context, a query clause answers the question “How well does this document match this query clause?” Besides deciding whether or not the document matches, the query clause also calculates a relevance score in the _score meta-field.”

“In a filter context, a query clause answers the question “Does this document match this query clause?” The answer is a simple Yes or No — no scores are calculated. Filter context is mostly used for filtering structured data”

  • Input Plugins - how to collect data from different sources
  • Filter Plugins - processing data
  • Output Plugins - push data to different outputs/services/platforms

A Logstash plugin which takes data in one format and transforms/emits it in another.


From the official docs:

“Kibana is an open source analytics and visualization platform designed to work with Elasticsearch. You use Kibana to search, view, and interact with data stored in Elasticsearch indices. You can easily perform advanced data analysis and visualize your data in a variety of charts, tables, and maps.”

Total number of documents matching the search results. If no query was used, then simply the total number of documents.


:star: Advanced

There are several possible answers for this question. One of them is as follows:

A small-scale Elastic architecture will consist of the Elastic Stack as it is: Beats, Logstash, Elasticsearch and Kibana. A production environment with large amounts of data may also include some kind of buffering component (e.g. Redis or RabbitMQ) and a security component such as Nginx.


:baby: Beginner

DNS (Domain Name System) is a protocol used for converting domain names into IP addresses. As you know, computer networking is done with IP addresses (layer 3 of the OSI model), but for humans it’s hard to remember IP addresses; it’s much easier to remember names. This is why we need something like DNS to convert any domain name we type into an IP address. You can think of DNS as a huge phonebook or database where each name has a corresponding IP.

In general the process is as follows:

  • The user types an address in the web browser (some_site.com)
  • The operating system gets a request from the browser to translate the address the user entered
  • A query is created to check whether a local entry for the address exists in the system. If it doesn’t, the request is forwarded to the DNS resolver
  • The resolver is a server, usually configured by your ISP when you connect to the internet, that is responsible for resolving your query by contacting other DNS servers
  • The resolver contacts the root nameserver (also known as .)
  • The root nameserver responds with the address of the relevant Top Level Domain (TLD) DNS server (if your address ends with org, then the org TLD)
  • The resolver then contacts the TLD DNS server, which responds with the IP address that matches the address the user typed in the browser
  • The Resolver passes this information to the browser
  • The user is happy :D
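The very first step (checking a locally resolvable entry) can be seen from Python by asking the operating system’s resolver; localhost resolves without any network access:

```python
import socket

# Ask the operating system's resolver to translate a name to an IPv4 address
ip = socket.gethostbyname("localhost")
print(ip)  # typically 127.0.0.1
```

Resolving a public domain the same way would trigger the full resolver chain described above.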
  • A
  • PTR
  • MX
  • AAAA

A (Address) Maps a host name to an IP address. When a computer has multiple adapter cards and IP addresses, it should have multiple address records.

While an A record points a domain name to an IP address, a PTR record does the opposite and resolves the IP address to a domain name.


According to Martin Kleppmann:

“Many processes running on many machines…only message-passing via an unreliable network with variable delays, and the system may suffer from partial failures, unreliable clocks, and process pauses.”

Another definition: “Systems that are physically separated, but logically connected”

  • Network
  • CPU
  • Memory
  • Disk

According to the CAP theorem, it’s not possible for a distributed data store to provide more than two of the following at the same time:

  • Availability: Every request receives a response (it doesn’t have to be the most recent data)
  • Consistency: Every request receives a response with the latest/most recent data
  • Partition tolerance: The system keeps running even when messages between nodes are dropped or delayed (a network partition)

Ways to improve:

  • Add another load balancer
  • Use DNS A record for both load balancers
  • Use message queue

It’s an architecture in which data is stored in and retrieved from a single, non-shared source, usually exclusively connected to one node, as opposed to architectures where the request can get to one of many nodes and the data will be retrieved from one shared location (storage, memory, …).


I like this definition from here:

“An explicitly and purposefully defined interface designed to be invoked over a network that enables software developers to get programmatic access to data and functionality within an organization in a controlled and comfortable way.”

Latency. To have a good latency, a search query should be forwarded to the closest datacenter.

Throughput. To have a good throughput, the upload stream should be routed to an underutilized link.

  • Keep caches updated (which means the request could be forwarded not to the closest datacenter)


  • Epic
  • Story
  • Task


  • Enable auto leader election and reduce the imbalance percentage ratio
  • Manually rebalance by using kafkat
  • Configure group.initial.rebalance.delay.ms to 3000
  • All of the above


  • Within the columnFamily GC-grace Once a week
  • Less than the compacted partition minimum bytes
  • Depends on the compaction strategy


  • Resolve host by request to DNS resolver
  • Client SYN
  • Server SYN+ACK
  • Client ACK
  • HTTP request
  • HTTP response

False. The server doesn’t maintain state for incoming requests.

It consists of:

  • Request line - request type
  • Headers - content info like length, encoding, etc.
  • Body (not always included)
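These three parts are visible if you split a raw request message yourself; the request below is a made-up example:

```python
# A raw HTTP request: request line, headers, a blank line, then an optional body
raw = (
    "POST /login HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Content-Length: 9\r\n"
    "\r\n"
    "user=anna"
)

# The blank line (\r\n\r\n) separates the head from the body
head, _, body = raw.partition("\r\n\r\n")
lines = head.split("\r\n")

request_line = lines[0]                               # "POST /login HTTP/1.1"
headers = dict(line.split(": ", 1) for line in lines[1:])

print(request_line)
print(headers["Host"], body)
```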
  • GET
  • POST
  • HEAD
  • PUT
  • 1xx - informational
  • 2xx - Success
  • 3xx - Redirect
  • 4xx - Error, client fault
  • 5xx - Error, server fault

HTTP is stateless. To share state, we can use Cookies.

A cookie is a small piece of data the server sends to the browser; the browser stores it and sends it back with subsequent requests to the same server, which lets the server recognize the client across otherwise stateless requests.

Load Balancers





Although the following questions are not DevOps related, they are still quite common and part of the DevOps interview process so it’s better to prepare for them as well.

Tell them how you heard about them :D Relax, there is no wrong or right answer here…I think.

Some ideas (some of them bad and should not be used):

  • Senior DevOps
  • Manager
  • Retirement
  • Your manager

If you’ve worked in this area for more than 5 years, it’s hard to imagine the answer would be no. It also doesn’t have to be a big service outage. Maybe you merged some code that broke a project or its tests. Simply focus on what you learned from such an experience.

You know your order best; just think carefully about whether you really want to put salary at the top or bottom…

Bad answer: I don’t. Better answer: Every person has strengths and weaknesses. This is true also for colleagues I don’t have a good work relationship with, and that is what helps me build a good work relationship with them. If I’m able to highlight or recognize their strengths, I can focus mainly on those when communicating with them.

You know best, but here are some ideas if you find it hard to express yourself:

  • Diversity
  • Complexity
  • Challenging
  • Communication with several different teams

You know best :)

You can use and elaborate on one or all of the following:

  • Passion
  • Motivation
  • Autodidact
  • Creativity (be able to support it with some actual examples)

Team Lead

Questions you CAN ask

A list of questions you, as a candidate, can ask the interviewer during or after the interview. These are only a suggestion; use them carefully. Not every interviewer will be able (or happy) to answer these, which might be a red flag regarding working in such a place, but that’s really up to you.

Be careful when asking this question - all companies, regardless of size, have some level of tech debt. Phrase the question in the light that all companies have to deal with this, but you want to see the current pain points they are dealing with.

This is a great way to figure out how managers deal with unplanned work, and how good they are at setting expectations with projects.

This can give you insights into some of the cool projects a company is working on, and whether you would enjoy working on projects like these. This is also a good way to see if the managers allow employees to learn and grow with projects outside of the normal work you’d do.

Similar to the tech debt question, this helps you identify any pain points with the company. Additionally, it can be a great way to show how you’d be an asset to the team.

For example, if they mention they have problem X and you’ve solved that in the past, you can show how you’d be able to mitigate that problem.

Not only will this tell you what is expected from you, it will also provide a big hint on the type of work you are going to do in the first months of your job.


  • Load Testing
  • Stress Testing
  • Capacity Testing
  • Volume Testing
  • Endurance Testing


A connection pool is a cache of database connections, and the reason it’s used is to avoid the overhead of establishing a connection for every query done to a database.

A connection leak is a situation where a database connection isn’t closed after being created and is no longer needed.

  • Query for running queries and cancel the irrelevant queries
  • Check for connection leaks (query for running connections and include their IP)
  • Check for table locks and kill irrelevant locking sessions
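The pooling idea can be sketched with the standard library; the class name, pool size, and use of an in-memory SQLite database here are illustrative assumptions, not a production implementation:

```python
import queue
import sqlite3

class ConnectionPool:
    """Reuse a fixed set of connections instead of opening one per query."""

    def __init__(self, size=3):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            # check_same_thread=False lets a connection be handed between threads
            self._pool.put(sqlite3.connect(":memory:", check_same_thread=False))

    def acquire(self):
        return self._pool.get()   # blocks if every connection is in use

    def release(self, conn):
        self._pool.put(conn)      # forgetting this call is a connection leak

pool = ConnectionPool(size=2)
conn = pool.acquire()
result = conn.execute("SELECT 1 + 1").fetchone()[0]
pool.release(conn)
print(result)
```

A real pool (e.g. in SQLAlchemy or pgbouncer) also validates connections before reuse and reclaims leaked ones after a timeout.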

“A data warehouse is a subject-oriented, integrated, time-variant and non-volatile collection of data in support of organisation’s decision-making process”

A single data source (at least usually) where data is stored in its raw format.


Given a text file, perform the following exercises


Bonus: extract the last word of each line
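One way to approach the bonus, shown here on an in-memory sample instead of an actual file (the text is made up):

```python
text = """first line here
second line there
third line everywhere"""

# Split each non-empty line on whitespace and keep the final token
last_words = [line.split()[-1] for line in text.splitlines() if line.split()]
print(last_words)  # ['here', 'there', 'everywhere']
```

With a real file you would iterate over `open(path)` instead of `text.splitlines()`; the equivalent one-liner in awk would be `awk '{print $NF}' file`.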




  • Not suitable for frequent code changes and the ability to deploy new features
  • Not designed for today’s infrastructure (like public clouds)
  • Scaling a team to work on a monolithic architecture is more challenging
  • Each of the services can fail individually without escalating into an application-wide outage
  • Each service can be developed and maintained by a separate team and this team can choose its own tools and coding language

This article provides a great explanation.


Vertical Scaling is the process of adding resources to increase the power of existing servers. For example, adding more CPUs, adding more RAM, etc.

Horizontal Scaling is the process of adding more servers that will be able to handle requests as one unit

The load on the producers or consumers may be high, which will then cause them to hang or crash. Instead of working in “push mode”, the consumers can pull tasks only when they are ready to handle them. This can be fixed by using a streaming platform like Kafka, Kinesis, etc. Such a platform will make sure to handle the high load/traffic and pass tasks/messages to consumers only when they are ready to get them.
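The pull model can be sketched with an in-process queue standing in for a platform like Kafka (the sentinel value and the doubling "work" are only for illustration):

```python
import queue
import threading

tasks = queue.Queue()   # stands in for a topic/partition
results = []

def consumer():
    while True:
        task = tasks.get()      # the consumer pulls only when it is ready
        if task is None:        # sentinel: no more work
            break
        results.append(task * 2)
        tasks.task_done()

t = threading.Thread(target=consumer)
t.start()

# The producer only enqueues; it never pushes work onto a busy consumer
for i in range(5):
    tasks.put(i)
tasks.put(None)
t.join()
print(results)  # [0, 2, 4, 6, 8]
```

If the consumer is slow, messages simply accumulate in the queue (as they would on the broker) instead of overwhelming the consumer.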


You can mention:

  • Roll-back & roll-forward
  • Cut over
  • Dress rehearsals
  • DNS redirection


Exercises are all about:

  • Setting up environments
  • Writing scripts
  • Designing and/or developing infrastructure apps
  • Fixing existing applications

Below you can find several exercises


Thanks to all of our amazing contributors who make it easy for everyone to learn new things :)

Logos credits can be found here


License: CC BY-NC-ND 3.0
