Random Interview Questions

A recount of my interview questions…

LiveRunGrow
68 min read · Oct 6, 2020

When will you use NoSQL and SQL?

Scalability:

  • SQL is designed for vertical scaling, where we increase the computation power of a single server machine, so we are limited in this aspect. Scaling horizontally involves much more effort in partitioning, sharding, clustering, etc., and joins across shards are expensive.
  • With NoSQL, on the other hand, we can simply add more nodes as needed.

Acid Compliance:

  • SQL ensures ACID compliance.
  • NoSQL may compromise on ACID (e.g., with lazy writes). Many NoSQL stores instead follow the BASE consistency model: Basic availability: the database guarantees the availability of the data, but it may fail to return the requested data, or the data may be in a changing or inconsistent state. Soft state: the state of the database can change over time. Eventual consistency: the database will eventually become consistent, and data will propagate everywhere at some point in the future.

Complexity of Queries:

  • SQL supports relations between tables. It efficiently executes queries and retrieves and edits data quickly.
  • A NoSQL database provides a ton of flexibility in the types of data that you can store, but because of the potentially large differences in data structures, querying isn’t as efficient as with a SQL database. You will have to perform extra processing on the data.

Data Schema:

  • NoSQL allows fast changes to the database schema. Data stored in a NoSQL database does not need the predefined schema that a SQL database requires; instead it can be stored as column stores, documents, graphs, or key-value pairs. This gives much more flexibility and requires less upfront planning when managing your database.
  • SQL requires well-defined schemas.

Difference between B+ Tree and BST

  • In a BST, each node has at most two children and holds exactly one key.
  • A B+ tree node can hold multiple keys and therefore have more than two children. Searching within a single node takes more comparisons than in a BST, but the tree is much shallower, so far fewer nodes need to be visited.

Why use B+ Tree when you can use BST?

One important advantage of B+ trees is that they are designed to work efficiently with disk-based storage systems, where data is stored on disk rather than in memory. B+ trees are optimized for minimizing the number of disk accesses required to access a particular piece of data, which is critical for performance in these systems. In contrast, BSTs are designed for in-memory use and may not perform as well in disk-based systems.

  • Sorted
  • One node can hold many keys and pointers, so a single disk read consistently brings in a large amount of data

Another advantage of B+ trees is that they can efficiently handle range queries, where the data is searched for a range of values rather than a single value. B+ trees store the data in sorted order, which makes it easy to traverse the tree and find all of the values in a particular range. In contrast, range queries can be more difficult to perform efficiently with BSTs, as the tree may need to be traversed multiple times to find all of the values in the range.

Additionally, B+ trees are more space-efficient than BSTs for large datasets, as they can store more values in each node of the tree. This reduces the overall height of the tree and reduces the number of disk accesses required to access a particular piece of data.

Overall, while BSTs can be useful for certain types of in-memory indexing tasks, B+ trees are a more versatile and efficient data structure for many real-world database applications, especially those involving large datasets and disk-based storage systems.

Difference between B- Tree and B+ Tree

B+ trees differ from B trees in the following ways:

  1. B+ trees don’t store data pointers in interior nodes; they are ONLY stored in leaf nodes (in a B tree this is optional). Because interior nodes carry no data, more keys can fit on a page of memory, so reaching data on a leaf node requires fewer cache misses.
  2. The leaf nodes of B+ trees are linked, so a full scan of all keys requires just one linear pass through the leaf nodes. A B tree, on the other hand, would require a traversal of every level of the tree, which will likely involve more cache misses than the linear traversal of the B+ leaves. Since data is stored only in the leaves, this property also enables efficient range scans.
  3. In both B trees and B+ trees, all leaf nodes are at the same level. A BST gives no such guarantee.

B+ trees use some clever balancing techniques to make sure that all of the leaves are always on the same level of the tree, that each node is always at least half full of keys, and therefore that the height of the tree is at most ceil(log(k)) with base ceil(n/2), where k is the number of values in the tree and n is the maximum number of pointers (= maximum number of keys + 1) in each node. This means that only a small number of pointer traversals is needed to find a value when the number of keys per node is large.

  • The root node in a B+-Tree can have a minimum of one key and two pointers.
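As a worked example of that height bound, here is a quick sketch (`maxBPlusTreeHeight` is an illustrative name, not a library function):

```javascript
// Height bound of a B+ tree: height <= ceil(log_{ceil(n/2)}(k)),
// where k = number of keys stored and n = max pointers per node.
function maxBPlusTreeHeight(k, n) {
  const minFanout = Math.ceil(n / 2); // each node is at least half full
  return Math.ceil(Math.log(k) / Math.log(minFanout));
}

// 1,000,000 keys with 100 pointers per node: log base 50 of 1e6 is about
// 3.53, so at most 4 levels separate the root from any key.
console.log(maxBPlusTreeHeight(1000000, 100)); // 4
```

This is why a database index over millions of rows can be answered with a handful of disk reads.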

What happens when the Hash table is full? Why do you want to double the size? Why not +100?

They work like this: when the table becomes x% full, you create a new hash table that is (say) double the size and move all the data into it by rehashing every element stored in the old table. The downside is that rehashing all the values is an O(n), i.e. linear, operation.

(From a stack overflow answer)

Hash-tables could not claim “amortized constant time insertion” if, for instance, the resizing was by a constant increment. In that case the cost of resizing (which grows with the size of the hash-table) would make the cost of one insertion linear in the total number of elements to insert. Because resizing becomes more and more expensive with the size of the table, it has to happen “less and less often” to keep the amortized cost of insertion constant.

Most implementations allow the average bucket occupation to grow until a bound fixed in advance before resizing (anywhere between 0.5 and 3, which are all acceptable values). With this convention, just after resizing the average bucket occupation becomes half that bound. Resizing by doubling keeps the average bucket occupation within a band of width ×2.
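The doubling-on-resize behaviour can be sketched in a few lines (a toy map, not a production implementation; `SimpleHashMap` and its thresholds are illustrative):

```javascript
// Minimal hash map that doubles its bucket array when the load factor
// exceeds a threshold. Doubling (rather than adding a constant like +100)
// is what keeps insertion amortized O(1).
class SimpleHashMap {
  constructor(capacity = 8, maxLoadFactor = 0.75) {
    this.buckets = Array.from({ length: capacity }, () => []);
    this.size = 0;
    this.maxLoadFactor = maxLoadFactor;
  }
  _index(key, capacity) {
    let h = 0;
    for (const ch of String(key)) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
    return h % capacity;
  }
  set(key, value) {
    if ((this.size + 1) / this.buckets.length > this.maxLoadFactor) this._resize();
    const bucket = this.buckets[this._index(key, this.buckets.length)];
    const entry = bucket.find(e => e[0] === key);
    if (entry) entry[1] = value;
    else { bucket.push([key, value]); this.size++; }
  }
  get(key) {
    const bucket = this.buckets[this._index(key, this.buckets.length)];
    const entry = bucket.find(e => e[0] === key);
    return entry ? entry[1] : undefined;
  }
  _resize() {
    // The O(n) step: every existing entry is rehashed into the doubled table.
    const old = this.buckets;
    this.buckets = Array.from({ length: old.length * 2 }, () => []);
    for (const bucket of old)
      for (const [k, v] of bucket)
        this.buckets[this._index(k, this.buckets.length)].push([k, v]);
  }
}

const map = new SimpleHashMap();
map.set("a", 1);
console.log(map.get("a")); // 1
```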

What makes a good hash function?

  1. The hash value is fully determined by the data being hashed. If something else besides the input data is used to determine the hash, then the hash value is not as dependent upon the input data, thus allowing for a worse distribution of the hash values.
  2. The hash function uses all the input data. If the hash function doesn’t use all the input data, then slight variations to the input data would cause an inappropriate number of similar hash values resulting in too many collisions.
  3. The hash function “uniformly” distributes the data across the entire set of possible hash values. If the hash function does not uniformly distribute the data across the entire set of possible hash values, a large number of collisions will result, cutting down on the efficiency of the hash table. -> Clustering problem
  4. The hash function generates very different hash values for similar strings. In the real world, many data sets contain very similar data elements. We would like these data elements to still be distributable over a hash table.
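A toy comparison with 16 buckets illustrates points 2 and 4 (`badHash` is a deliberately poor illustrative hash; the other is a djb2-style string hash):

```javascript
// A deliberately bad hash that ignores most of the input (only the first
// character matters) versus a djb2-style hash that mixes every character.
function badHash(s) { return s.charCodeAt(0) % 16; }
function djb2(s) {
  let h = 5381;
  for (const ch of s) h = ((h * 33) ^ ch.charCodeAt(0)) >>> 0;
  return h % 16;
}

const similar = ["user1", "user2", "user3", "user4"];
console.log(similar.map(badHash)); // all identical: every key collides
console.log(similar.map(djb2));    // distinct buckets for these keys
```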

How does JS run in browser?

JavaScript (JS) code runs in the JavaScript engine of a web browser. The JavaScript engine is responsible for executing the JS code and interpreting it to interact with the Document Object Model (DOM) and the browser’s rendering engine.

When a web page containing JS code is loaded in a browser, the browser’s rendering engine parses the HTML and CSS, and then encounters the JS code. The browser creates a JavaScript execution environment, which includes a global execution context, a stack, and a heap.

The JS engine then compiles the JS code into machine code and executes it. The compilation process involves several steps, including lexical analysis, syntax analysis, and code generation.

During the execution of the JS code, the JS engine creates a stack frame for each function call, which contains the function’s parameters, local variables, and other relevant data. The stack frame is pushed onto the stack, and when the function returns, the stack frame is popped off the stack.

As the JS code runs, it interacts with the DOM and the browser’s rendering engine, modifying the web page in response to user events or other interactions. The JS engine also manages memory allocation and garbage collection to ensure efficient use of system resources.

Overall, the JavaScript engine plays a critical role in the functioning of a web browser, allowing web developers to create dynamic and interactive web pages that respond to user actions and other events.

How do JavaScript work? Are JavaScript Single Threaded?

  • Yes. The runtime can only run code line by line and must finish executing before going to the next.
  • V8 is Google's JavaScript engine; together with the browser's Web APIs it provides the environment in which JavaScript runs.
  • What happens when things are slow? We have to wait for them one by one until they are done?
  • This is a problem. If we have this in a browser, then the browser appears “frozen” and cannot do anything.
  • The solution: Asynchronous Callbacks
  • The stack for that example first runs console.log(“Hi”), then console.log(“JSConfEU”); only after a while does console.log(“There”) appear on the stack.
  • The runtime can only run code line by line and must finish executing before going to the next. So how does the above happen? How can we skip the function in the middle? The reason is that a browser is made of more than just the run time.

When a Web API (such as a timer or an HTTP request) finishes, it pushes its callback onto the task queue.

The event loop will be in charge of looking at the stack and the task queue. If the stack is empty, it pushes the first thing in the task queue onto the stack.

The call back gets executed.
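Assuming the deferred call is scheduled with setTimeout (as in the classic event-loop demo), the whole sequence can be reproduced directly:

```javascript
// setTimeout hands its callback to a Web API (a timer). Only once the
// stack is empty does the event loop move the callback from the task
// queue onto the stack, so "There" always prints last.
console.log("Hi");
setTimeout(() => console.log("There"), 0); // even with 0 ms, runs later
console.log("JSConfEU");
// Output order: Hi, JSConfEU, There
```

Note that even a 0 ms timeout does not run immediately: the callback must wait for the current stack to finish.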

What is NodeJS?

  • It is a run time environment for executing JS code outside a browser.


API system rate limiting

  • Helps prevent the database from degrading under a spike of requests from a single consumer. It is also important from a security point of view, as it counters DoS attacks.
  • Rate Limiting is important where you want to ensure a good quality of service for every consumer.
  • There are two types of rate limiting: Backend Rate Limiting & Application Rate Limiting.

Backend Rate Limiting:

  • API Owners typically measure processing limits in Transaction Per Second (TPS) and impose a limit on it.

Application Rate Limiting:

  • API Owners enforce a limit on the number of requests a client can consume.
  • Clients can check the HTTP response headers (commonly X-RateLimit-* headers) to find out about the rate limit.

Limiting calls made to Third Party APIs:

  • For example, if your client calls the Google Maps API directly, there is not much you can do to limit it. If the rate-limited API is accessed via some form of backend process, rate limiting is much easier to apply.

Implementation

(1) Request Queues: Set a rate limit (say, two requests per second) and place the remaining requests in a request queue.

(2) Throttling: Set up a temporary state and allow the API to assess each request. When the throttle is triggered, a user may either be disconnected or simply have their bandwidth reduced.

(3) Algorithms

Find out more here: https://nordicapis.com/everything-you-need-to-know-about-api-rate-limiting/

The small number of data points needed to assess each request makes the sliding window algorithm an ideal choice for processing large amounts of requests while still being light and fast to run.
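A sliding-window-log limiter can be sketched like this (`SlidingWindowLimiter` is an illustrative name; the current time is passed in explicitly so the logic is easy to test):

```javascript
// Sliding-window-log rate limiter: keep timestamps of recent requests and
// allow a new request only if fewer than `limit` fall inside the window.
class SlidingWindowLimiter {
  constructor(limit, windowMs) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.timestamps = [];
  }
  allow(now = Date.now()) {
    // Drop timestamps that have slid out of the window.
    while (this.timestamps.length && this.timestamps[0] <= now - this.windowMs)
      this.timestamps.shift();
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }
}

const limiter = new SlidingWindowLimiter(2, 1000); // 2 requests per second
console.log(limiter.allow(0), limiter.allow(1), limiter.allow(2)); // true true false
```

Only the timestamps inside the current window are kept, which is the "small number of data points" the quote refers to.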

What is dependency injection and how does it affect maintainability?

In the code, the interaction between objects should be as minimal as possible. You should ask for precisely what you need and nothing more. If you need something, you should ask for it — have it “delivered” to you instead of trying to create it yourself.

This allows for having loosely coupled code.

If your classes are loosely coupled and follow the single responsibility principle — the natural result of using DI — then your code will be easier to maintain.

Simple, stand-alone classes are easier to fix than complicated, tightly coupled classes.
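A minimal sketch of constructor injection (`UserService` and the repository shape are illustrative, not a framework API):

```javascript
// UserService asks for its dependency instead of creating it, so a test
// can hand it a fake repository with no database involved.
class UserService {
  constructor(repository) { this.repository = repository; } // injected
  greet(id) {
    const user = this.repository.findById(id);
    return `Hello, ${user.name}`;
  }
}

// In production you would inject a real database-backed repository;
// in a test, a simple in-memory fake is enough:
const fakeRepo = { findById: () => ({ name: "Ada" }) };
console.log(new UserService(fakeRepo).greet(1)); // "Hello, Ada"
```

Because the class never constructs its own dependencies, swapping implementations requires no change to the class itself.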

Thread Pooling

What is it?

A thread pool is a pool of threads that can be “reused” to execute tasks, so that each thread may execute more than one task. A thread pool is an alternative to creating a new thread for each task you need to execute.

Creating a new thread comes with a performance overhead compared to reusing a thread that is already created. That is why reusing an existing thread to execute a task can result in a higher total throughput than creating a new thread per task.

Additionally, using a thread pool can make it easier to control how many threads are active at a time. Each thread consumes a certain amount of computer resources, such as memory (RAM), so if you have too many threads active at the same time, the total amount of resources (e.g. RAM) that is consumed may cause the computer to slow down.

How a Thread Pool Works?

Thread pools are often used in multi-threaded servers. Each connection arriving at the server via the network is wrapped as a task.

Instead of starting a new thread for every task to execute concurrently, the task can be passed to a thread pool. As soon as the pool has any idle threads the task is assigned to one of them and executed. Internally the tasks are inserted into a Blocking Queue which the threads in the pool are dequeuing from. When a new task is inserted into the queue one of the idle threads will dequeue it successfully and execute it. The rest of the idle threads in the pool will be blocked waiting to dequeue tasks.
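JavaScript has no shared-memory threads in ordinary application code, so here is a sketch of the same pattern using N asynchronous "workers" draining a shared queue (`runPool` is an illustrative name; in Java this role is played by an ExecutorService over a BlockingQueue):

```javascript
// Worker-pool pattern: poolSize workers repeatedly dequeue tasks from a
// shared queue and execute them, so each worker runs many tasks.
async function runPool(tasks, poolSize) {
  const queue = [...tasks];
  const results = [];
  async function worker() {
    while (queue.length) {
      const task = queue.shift();  // dequeue the next task
      results.push(await task());  // execute it, then loop for more work
    }
  }
  // Start poolSize workers; Promise.all waits until the queue is drained.
  await Promise.all(Array.from({ length: poolSize }, worker));
  return results;
}

// Three tasks, two workers: each worker executes more than one task.
const demoTasks = [1, 2, 3].map(n => async () => n * n);
runPool(demoTasks, 2).then(results => console.log(results));
```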

ArrayBlockingQueue in Java

http://tutorials.jenkov.com/java-concurrency/thread-pools.html

How is it used in database connections?

It is primarily used in situations where multiple clients or applications need to access a database simultaneously.

The assigned thread establishes a connection with the database and performs the required operations on behalf of the client application, such as executing SQL queries or transactions.

After completing the database operations, the thread releases the database connection back to the connection pool. The thread remains idle and ready to handle the next client request.

  • Thread pooling allows better control over the number of concurrent database connections.
  • Connection pooling manages a pool of established database connections, which further enhances performance by reusing existing connections instead of establishing a new one for each request.

RESTful Web service

An API is an interface that determines how components of a piece of software interact with each other.

REST is an architecture style for designing network applications. It relies on a stateless, client-server protocol (HTTP).

What does REST stand for?

REST stands for Representational State Transfer.

“Primary characteristics of REST are being stateless and using GET to access resources. In a truly RESTful application, the server can restart between calls as data passes through it.”

“REST has many advantages. It’s easy to scale, flexible and portable and works independently from the client and server, which makes development less complex. ”

Independent of the programming platform.

Name some of the commonly used HTTP methods used in REST based architecture?

POST, GET, PUT, PATCH, DELETE. These correspond to create, read, update (full or partial), and delete (CRUD) operations.

200 OK

404 NOT FOUND

201 CREATED

405 METHOD NOT ALLOWED

Post

When creating a new resource, POST to the parent and the service takes care of associating the new resource with the parent, assigning an ID (new resource URI), etc.

POST is neither safe nor idempotent; it is therefore recommended for non-idempotent resource requests. Making two identical POST requests will most likely result in two resources containing the same information. On successful creation, return HTTP status 201 (CREATED) with a Location header pointing to the newly created resource.

Get

The HTTP GET method is used to **read** (or retrieve) a representation of a resource. In the “happy” (or non-error) path, GET returns a representation in XML or JSON and an HTTP response code of 200 (OK). In an error case, it most often returns a 404 (NOT FOUND) or 400 (BAD REQUEST).

GET requests are considered safe; that is, they can be called without risk of data modification or corruption.

Put

PUT is most often used for updates: you PUT to a known resource URI with the request body containing the newly updated representation of the original resource.

However, PUT can also be used to create a resource when the resource ID is chosen by the client instead of by the server; in other words, when the PUT is to a URI containing a currently non-existent resource ID.

PUT is not a safe operation, in that it modifies (or creates) state on the server, but it is idempotent. In other words, if you create or update a resource using PUT and then make that same call again, the resource is still there and still has the same state as it did with the first call.
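A toy in-memory sketch of this difference (the `store`, `post`, and `put` helpers are illustrative, not a real server):

```javascript
// Repeating a PUT leaves the same single resource in the same state;
// repeating a POST creates a new resource each time.
const store = new Map();
let nextId = 1;

function post(body) {      // server assigns the ID: not idempotent
  const id = nextId++;
  store.set(id, body);
  return id;
}
function put(id, body) {   // client chooses the ID: idempotent
  store.set(id, body);
}

post({ name: "a" });
post({ name: "a" });            // two resources now exist
put(99, { name: "b" });
put(99, { name: "b" });         // still exactly one resource at ID 99
console.log(store.size);        // 3
```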

Delete

On successful deletion, return HTTP status 200 (OK) along with a response body, perhaps the representation of the deleted item (often demands too much bandwidth), or a wrapped response (see Return Values below). Either that or return HTTP status 204 (NO CONTENT) with no response body. In other words, a 204 status with no body, or the JSEND-style response and HTTP status 200 are the recommended responses.

Is caching present?

Caching refers to storing the server response in the client itself, so that a client need not make a server request for the same resource again and again.

Usually, browsers treat all GET requests as cacheable. POST requests are not cacheable by default but can be made cacheable by adding either an Expires header or a Cache-Control header that explicitly allows caching to the response. Responses to PUT and DELETE requests are not cacheable at all.

The Expires HTTP header specifies an absolute expiry time for a cached representation.

The Cache-Control header determines whether a response is cacheable and, if so, by whom and for how long, e.g. via the max-age or s-maxage directives.

Difference between PUT and PATCH?

PUT is a method of modifying resource where the client sends data that updates the entire resource. It is used to set an entity’s information completely. PUT is similar to POST in that it can create resources, but it does so when there is a defined URI. PUT overwrites the entire entity if it already exists, and creates a new resource if it doesn’t exist.

Unlike PUT, PATCH applies a partial update to the resource.

This means that you only need to send the data you want to update, and it won’t affect or change anything else. So if you want to update a person’s first name in a database, you only need to send that one field: the first name.

By contrast, with a PUT request you need to send the entire resource even when you only want to change the person’s first name.
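A tiny sketch of the difference (`putUpdate` and `patchUpdate` are illustrative helpers, not a framework API):

```javascript
// PUT replaces the whole representation; PATCH merges in only the
// fields that were sent.
const resource = { firstName: "Ada", lastName: "Lovelace" };

function putUpdate(target, body) { return { ...body }; }              // full replace
function patchUpdate(target, body) { return { ...target, ...body }; } // partial merge

console.log(putUpdate(resource, { firstName: "Grace" }));
// { firstName: "Grace" }               <- lastName is gone
console.log(patchUpdate(resource, { firstName: "Grace" }));
// { firstName: "Grace", lastName: "Lovelace" }
```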

What is a resource in REST?

A resource is a building block of REST: a name for any piece of content in a RESTful architecture. This includes HTML, text files, images, video, and more.

What are the disadvantages of REST?

(1) RESTful APIs are built on URIs referencing resources. URIs are convenient for HTTP requests and caching, but they are a poor fit for resources that are not naturally organized or accessed in a simple hierarchy.

For example, a request like, “Return all updated records from the last 3 hours containing the word cat” does not lend itself to being expressed as a path, so is likely to have to be implemented with some combination of URI path, query parameters, and perhaps request body.

(2) RESTful APIs typically rely on a few http methods (GET, POST, PUT, DELETE, perhaps PATCH), but many common client/server operations can only awkwardly be shoehorned into the standard methods. “Move expired documents to the archive folder” is an example of an application-specific custom verb that’s outside of typical http methods and CRUD operations.

https://www.quora.com/What-are-the-drawbacks-of-using-RESTful-APIs

Query Param, Path Param

Path params: identify a specific resource when retrieving a single resource, e.g. https://example.com/users/12345.

Query params: used to filter and sort when fetching multiple resources, e.g. https://example.com/products?category=electronics&sort=price

Is HTTP same as REST?

No, they are not. HTTP stands for HyperText Transfer Protocol and is a way to transfer hypertext documents and other files; it is the protocol that links pages of hypertext in what we call the world-wide-web. Other transfer protocols exist, such as FTP and Gopher, though they are less popular.

Representational State Transfer, or REST, is a set of constraints that ensure a scalable, fault-tolerant and easily extendible system. The world-wide-web is an example of such system (and the biggest example, one might say). REST by itself is not a new invention, but it’s the documentation on such systems like the world-wide-web.

One thing that confuses people, is that REST and HTTP seem to be hand-in-hand. After all, the world-wide-web itself runs on HTTP, and it makes sense, a RESTful API does the same. However, there is nothing in the REST constraints that makes the usage of HTTP as a transfer protocol mandatory. It’s perfectly possible to use other transfer protocols like SNMP, SMTP and others to use, and your API could still very well be a RESTful API.

In practice, most (if not all) RESTful APIs currently use HTTP as a transport layer, since the infrastructure, servers, and client libraries for HTTP are already widely available.

Note that there is also a big difference between a RESTful API and an HTTP API. A RESTful API adheres to ALL the REST constraints set out in its “format” documentation (Roy Fielding’s dissertation). An HTTP API is ANY API that uses HTTP as its transfer protocol; even SOAP can be considered an HTTP API as long as it uses HTTP for transport. Most HTTP APIs, however, make more and better use of the infrastructure and possibilities of HTTP, and many are very close to being truly RESTful, which can be gauged by their Richardson Maturity Model level.

How does a browser load a website?

Once your browser receives an HTML file, it needs to render the page and it has to go through a few steps before you see it displayed. These steps are called the critical rendering path in which your browser needs to:

  1. Process HTML markup and build the DOM tree.
  2. Process CSS markup and build the CSSOM tree.
  3. Combine the DOM and CSSOM into a render tree.
  4. Run the layout on the render tree to compute the geometry of each node.
  5. Paint the individual nodes to the screen.

There are a few things we can do to improve the time it takes to render a web page. Fewer embedded file requests, smaller files being requested, and reducing the number of render blocking resources will all improve performance, but they aren’t the only things.

What is the purpose of -> in Java?

(Parameters) -> { Body }, where the -> separates the parameter list from the lambda expression’s body.

Lambda Expression

A lambda expression is a short block of code which takes in parameters and returns a value. Lambda expressions are similar to methods, but they do not need a name and they can be implemented right in the body of a method.

Anonymous Function

An anonymous function is a function that was declared without any named identifier to refer to it. As such, an anonymous function is usually not accessible after its initial creation.

Normal function definition:

function hello() {
  alert('Hello world');
}
hello();

Anonymous function definition:

var anon = function() {
  alert('I am anonymous');
}
anon();
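The modern JavaScript equivalent is an arrow function, which, like a Java lambda, is an anonymous function with a compact arrow syntax (`anonArrow` is an illustrative name):

```javascript
// An arrow function is an anonymous function with compact syntax.
const anonArrow = () => "I am anonymous";
console.log(anonArrow()); // "I am anonymous"

// Arrow functions shine as short inline callbacks:
console.log([1, 2, 3].map(n => n * 2)); // [2, 4, 6]
```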

So, what is GitHub Flow?

  1. Anything in the master branch is deployable.
  2. To work on something new, create a descriptively named branch off of master (e.g. new-oauth2-scopes).
  3. Commit to that branch locally and regularly push your work to the same-named branch on the server.
  4. When you need feedback or help, or you think the branch is ready for merging, open a pull request.
  5. After someone else has reviewed and signed off on the feature, you can merge it into master.
  6. Once it is merged and pushed to master, you can and should deploy immediately.

How to improve frontend loading?

  1. Minification
  2. Pre fetch
  3. Remove unnecessary images, JS
  4. CDN and Caching

Minification

Minification (also minimisation or minimization) is the process of removing all unnecessary characters from the source code of interpreted programming languages or markup languages without changing its functionality.

Database Locking

Database locking is a technique used to manage concurrent access to a database by multiple users or processes. Locking is used to ensure that only one user or process can modify a particular record or set of records in the database at a time, preventing conflicts and data inconsistencies.

There are two types of database locking: pessimistic locking and optimistic locking.

  1. Pessimistic Locking: Pessimistic locking involves acquiring a lock on a database record or set of records before modifying it. This ensures that no other user or process can modify the same record at the same time. Pessimistic locking is typically used in situations where conflicts are likely to occur, such as in banking applications or inventory management systems. When a user or process requests a lock on a record, the database management system (DBMS) will prevent other users or processes from modifying the record until the lock is released.
  2. Optimistic Locking: Optimistic locking involves assuming that conflicts are unlikely to occur and allowing multiple users or processes to modify the same record at the same time. However, before committing a change to the database, the DBMS checks to see if the record has been modified by another user or process since it was last read. If the record has been modified, the DBMS will abort the transaction and notify the user or process that the record has been modified by another user. Optimistic locking is typically used in situations where conflicts are rare and the cost of acquiring locks is high.
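The optimistic check can be sketched with a version number (`record` and `update` are illustrative; a real DBMS does this inside the transaction commit):

```javascript
// Optimistic locking: each record carries a version number. An update
// succeeds only if the version the client read is still current;
// otherwise the write is rejected and the caller must retry.
const record = { id: 1, balance: 100, version: 1 };

function update(rec, expectedVersion, changes) {
  if (rec.version !== expectedVersion) return false; // someone else won the race
  Object.assign(rec, changes);
  rec.version++;
  return true;
}

const v = record.version;                        // both clients read version 1
console.log(update(record, v, { balance: 90 })); // true: first writer wins
console.log(update(record, v, { balance: 80 })); // false: stale version, retry
```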

In practice, database locking is implemented using a variety of techniques, including row-level locking, table-level locking, and database-level locking. The exact technique used depends on the requirements of the application and the capabilities of the DBMS being used. When a lock is acquired, the DBMS typically adds a marker to the record or set of records being modified, indicating that they are locked and cannot be modified by other users or processes until the lock is released.

  1. Row-level Locking: Row-level locking is a technique in which a DBMS locks individual rows in a table, rather than the entire table. When a transaction requests a lock on a specific row, the DBMS will lock only that row, allowing other transactions to access other rows in the table. This technique is useful when only a few rows in a table are being accessed or modified by multiple transactions, as it allows for better concurrency and reduces the likelihood of conflicts.
  2. Table-level Locking: Table-level locking is a technique in which a DBMS locks the entire table, preventing any other transactions from accessing or modifying any part of the table until the lock is released. This technique is useful when multiple transactions need to modify the entire table at once, but it can lead to poor concurrency and increased wait times for transactions that only need to access a small portion of the table.
  3. Database-level Locking: Database-level locking is a technique in which a DBMS locks the entire database, preventing any other transactions from accessing or modifying any part of the database until the lock is released. This technique is typically used in situations where maintenance or administrative tasks need to be performed on the entire database, but it can lead to poor concurrency and increased wait times for transactions that only need to access a small portion of the database.

In general, row-level locking is the preferred technique for managing concurrent access to a database, as it allows for better concurrency and reduces the likelihood of conflicts. However, table-level and database-level locking may be necessary in certain situations where multiple transactions need to access or modify the entire table or database.

How does redis do cache eviction?

To prevent running out of memory, Redis uses cache eviction to remove data from memory when necessary. Redis provides several options for cache eviction, including:

  1. LRU (Least Recently Used): removes the least recently used keys to make room for new data when the maximum memory limit is reached (the allkeys-lru and volatile-lru policies). Note that Redis’s out-of-the-box default is actually noeviction; an LRU policy must be enabled explicitly via maxmemory-policy.
  2. LFU (Least Frequently Used): removes the least frequently used keys to make room for new data when the maximum memory limit is reached. This can be useful when certain keys are accessed more frequently than others.
  3. Random: removes a random key to make room for new data when the maximum memory limit is reached. This can be useful when it’s difficult to predict which keys will be accessed most frequently.
  4. TTL (volatile-ttl): among keys with an expiration set, evicts those with the shortest remaining Time to Live first. Independently of eviction, keys whose TTL has elapsed are deleted automatically.
  5. noeviction: rather than evicting anything, Redis returns an error on writes once the maxmemory limit is reached. This is useful when data must never be dropped silently.
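The LRU idea itself is easy to sketch using a Map's insertion order (`LRUCache` is an illustrative toy, not Redis's actual implementation, which uses an approximated LRU):

```javascript
// LRU eviction sketch: reads re-insert the key so it becomes the most
// recently used; when over capacity, the oldest Map entry is evicted.
class LRUCache {
  constructor(capacity) { this.capacity = capacity; this.map = new Map(); }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key);
    this.map.set(key, value); // mark as most recently used
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity)
      this.map.delete(this.map.keys().next().value); // evict least recently used
  }
}

const cache = new LRUCache(2);
cache.set("a", 1);
cache.set("b", 2);
cache.get("a");      // "a" is now most recently used
cache.set("c", 3);   // evicts "b"
console.log(cache.get("b")); // undefined
```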

HTTPS/HTTP

HTTP (Hypertext Transfer Protocol) is a protocol used for communication between web servers and clients over the internet. It’s the foundation of the World Wide Web and allows the exchange of text, images, videos, and other content. HTTP is a stateless protocol, which means that each request and response is independent of previous requests and responses.

HTTPS (Hypertext Transfer Protocol Secure) is a secure version of HTTP. It provides encryption of data in transit between the web server and client to prevent eavesdropping, tampering, and other security threats. HTTPS uses SSL/TLS (Secure Sockets Layer/Transport Layer Security) protocols to establish a secure connection between the server and client.

You should use HTTPS whenever you need to transmit sensitive or private data over the internet. This includes credit card information, login credentials, personal information, and any other data that should be kept secure. Using HTTPS can protect your users’ data from interception and tampering by attackers.

HTTPS works by encrypting data in transit between the server and client using SSL/TLS protocols. When a user requests an HTTPS page, the server sends its SSL/TLS certificate to the client, which contains the server’s public key. The client then uses the public key to encrypt a random session key, which is used for encrypting and decrypting data during the session. This ensures that only the server with the private key can decrypt the data.

  • The client generates a random session key and encrypts it with the server’s public key. The server decrypts the session key with its private key and uses it for encrypting and decrypting data during the session.
  • The client and server exchange encrypted data using the session key. The data is decrypted only by the receiving party using the session key.

Once the SSL/TLS handshake is complete, the server and client can exchange data over a secure, encrypted connection. This provides confidentiality, integrity, and authentication of the data in transit, making it much more difficult for attackers to intercept or tamper with the data.

In summary, HTTPS is a secure version of HTTP that uses SSL/TLS protocols to provide encryption of data in transit between the server and client. HTTPS should be used whenever sensitive or private data is transmitted over the internet to protect against security threats.

Symmetric/Asymmetric encryption

Symmetric encryption uses a single secret key to both encrypt and decrypt data. This means that the same key is used to both scramble and unscramble the data.

Asymmetric encryption, also known as public-key cryptography, uses a pair of keys — a public key and a private key — to encrypt and decrypt data. The public key is shared freely and can be used by anyone to encrypt data, while the private key is kept secret and used to decrypt data. Asymmetric encryption allows for secure communication between two parties without needing to share a secret key in advance. Asymmetric encryption is slower than symmetric encryption and is often used for secure key exchange or digital signature verification.

Why does HTTPS use both symmetric and asymmetric encryption?

The initial use of asymmetric encryption provides a secure way for the client to establish a shared secret key with the server without the risk of the key being intercepted by a third party.

The symmetric encryption, used later on after the initial verification, allows for fast and efficient communication.
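The hybrid scheme can be sketched with the JDK's built-in crypto classes. This is a toy illustration of the idea only; real TLS uses authenticated key exchange and AEAD ciphers, not raw RSA and AES like this:

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;

public class HybridDemo {
    public static void main(String[] args) throws Exception {
        // "Server" side: an RSA key pair (asymmetric).
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair serverKeys = kpg.generateKeyPair();

        // "Client" side: a random AES session key (symmetric).
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey sessionKey = kg.generateKey();

        // Client encrypts the session key with the server's public key...
        Cipher rsa = Cipher.getInstance("RSA");
        rsa.init(Cipher.ENCRYPT_MODE, serverKeys.getPublic());
        byte[] wrappedKey = rsa.doFinal(sessionKey.getEncoded());

        // ...and only the server's private key can recover it.
        rsa.init(Cipher.DECRYPT_MODE, serverKeys.getPrivate());
        SecretKey recovered = new SecretKeySpec(rsa.doFinal(wrappedKey), "AES");

        // From here on, both sides use fast symmetric AES for the actual data.
        Cipher aes = Cipher.getInstance("AES");
        aes.init(Cipher.ENCRYPT_MODE, sessionKey);
        byte[] ciphertext = aes.doFinal("hello".getBytes(StandardCharsets.UTF_8));
        aes.init(Cipher.DECRYPT_MODE, recovered);
        System.out.println(new String(aes.doFinal(ciphertext), StandardCharsets.UTF_8));
        // prints: hello
    }
}
```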

Explain how disconnection happens in TCP

The disconnect or termination of a TCP connection can occur in several ways, including:

Active close: In an active close, one of the devices initiates the termination of the connection by sending a FIN (finish) packet to the other device. The receiving device acknowledges the FIN with an ACK (acknowledgment) packet and sends its own FIN packet to complete the termination process.

Passive close: In a passive close, one of the devices waits for the other device to initiate the termination of the connection. This is often used when a server is handling multiple connections and doesn’t know when a client will be finished.

Abort: An abort occurs when one of the devices unexpectedly terminates the connection without going through the normal termination process. This can happen if there is a network failure, a software error, or a timeout.

In a graceful close (active or passive), TCP ensures that all data has been reliably delivered and acknowledged before the connection is closed, so both devices are aware of the state of the connection. An abort, by contrast, sends an RST segment, and any in-flight or unacknowledged data may be lost.

How does Kill command work?

The kill command in Unix-like operating systems sends a signal to a process or a group of processes specified by their Process IDs (PIDs). The signal is a software interrupt that is delivered to the process, requesting it to take some action, such as terminating or stopping.

By default, the kill command sends the SIGTERM signal, which asks the process to terminate gracefully. The process can catch this signal and perform some cleanup operations before exiting. If the process does not respond to the SIGTERM signal, or if a more forceful termination is required, the kill command can send other signals, such as SIGKILL, which immediately terminates the process without giving it a chance to clean up.
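Java's Process API maps onto the same signals on Unix-like systems: destroy() is the polite request (SIGTERM on Unix) and destroyForcibly() is the non-catchable one (SIGKILL). A small sketch, assuming a Unix-like OS with a `sleep` binary on the PATH:

```java
import java.util.concurrent.TimeUnit;

public class KillDemo {
    public static void main(String[] args) throws Exception {
        // Start a long-running child process.
        Process p = new ProcessBuilder("sleep", "60").start();

        // destroy() asks the OS to terminate the process gracefully (SIGTERM).
        p.destroy();
        if (!p.waitFor(2, TimeUnit.SECONDS)) {
            // destroyForcibly() is the SIGKILL equivalent: no chance to clean up.
            p.destroyForcibly();
            p.waitFor();
        }
        System.out.println("alive after kill? " + p.isAlive());
        // prints: alive after kill? false
    }
}
```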

What’s the difference between HashMap and Hashtable?

HashMap and HashTable are both data structures used to store key-value pairs. They have similar functionality but differ in a few important ways:

  1. Synchronization: HashTable is synchronized, meaning that it is thread-safe, while HashMap is not. This means that multiple threads can access and modify a HashMap object concurrently, but only one thread can access a HashTable object at a time.
  2. Null values: HashMap allows null values for both keys and values, while HashTable does not allow null keys or values.
  3. Iteration: HashMap provides an iterator over the keys, values, or entries (key-value pairs) of the map, while Hashtable historically exposed only an Enumeration over its keys (as a Map, modern Hashtable supports iterators too).
  4. Performance: HashMap is generally faster than HashTable because it is not synchronized, but this can depend on the specific use case and the size of the map.

In summary, if you need thread safety, use Hashtable (or, better, ConcurrentHashMap); otherwise, HashMap is generally the better choice.

ConcurrentHashMap is also a hash-table-based map, but it is designed for concurrent access.

  • It is also thread safe but it is generally faster than HashTable because of its partition approach to locking sections of the map rather than locking the entire map. This means that multiple threads can access different segments of the map concurrently, without blocking each other.
  • Note that, like Hashtable, ConcurrentHashMap does not allow null keys or values.
import java.util.concurrent.ConcurrentHashMap;

ConcurrentHashMap<String, Integer> concurrentHashMap = new ConcurrentHashMap<>();
concurrentHashMap.put("apple", 1);
concurrentHashMap.put("banana", 2);
concurrentHashMap.put("cherry", 3);

ConcurrentHashMap is designed to handle high levels of concurrency more efficiently, while Hashtable is a synchronized map that is simpler to use but may not perform as well in highly concurrent environments.
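The practical payoff of locking sections rather than the whole map is that many threads can update it safely without any external synchronization. A small sketch using the atomic per-key merge():

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrentCount {
    public static void main(String[] args) throws Exception {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 1000; i++) {
            // merge() is atomic per key, so no lost updates even with 8 threads.
            pool.submit(() -> counts.merge("hits", 1, Integer::sum));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(counts.get("hits")); // prints: 1000
    }
}
```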

Differences between Cassandra, CockroachDB, MySQL, Postgres?

Cassandra

Cassandra stores data using a distributed architecture that allows it to scale horizontally by adding more nodes to the cluster. Data is distributed across the nodes in the cluster using a partitioning scheme called consistent hashing. Each node in the cluster is responsible for a portion of the data, and this responsibility is determined by the consistent hashing algorithm.

Within each node, data is stored in a column-family model, which is similar to a table in a relational database. A column family consists of a set of rows, each identified by a unique key. Each row can have multiple columns, which are grouped into column families. The columns within a family are stored together, making it more efficient to read and write related data.

In Cassandra, data is organized into column families: groups of related columns that are stored together on disk. Each column family has a name, and within it the data is stored as a set of rows, each identified by a unique row key. When data is written, the columns within a family are physically stored together, so all of the columns for a particular row can be retrieved with a single disk read.

This contrasts with a traditional relational database, where each row of a table is stored as a separate unit on disk; a query there may need to retrieve data from multiple different locations on disk, which can be slower and less efficient than reading from a single location.

+------------------------+
| Column Family 1 (CF1)  |
+------------------------+
| Row Key 1              |
+------------------------+
| Column 1 (in CF1)      |
| Column 2 (in CF1)      |
+------------------------+

+------------------------+
| Column Family 2 (CF2)  |
+------------------------+
| Row Key 1              |
+------------------------+
| Column 3 (in CF2)      |
| Column 4 (in CF2)      |
+------------------------+

In this example, the data for Row Key 1 is stored in two separate column families: Column Family 1 (CF1) and Column Family 2 (CF2).

Columns 1 and 2 are stored in CF1, and columns 3 and 4 are stored in CF2. Note that within each column family, the columns are stored together on disk, as described above.

When retrieving data from Cassandra, you can specify which column families to query, and Cassandra will return the requested columns from the appropriate families.

Cassandra also supports the concept of secondary indexes, which allow you to query data by non-primary key columns. Secondary indexes are stored in a separate table, and they are updated asynchronously, so there may be a slight delay between updates to the primary data and updates to the secondary indexes.

  • By default, Cassandra allows you to query data only by the primary key, which is the row key that uniquely identifies each row in the column family.
  • However, in many cases, you may want to be able to query data based on a non-primary key column, and this is where secondary indexes come in.

CockroachDB

Distributed SQL database. It stores data in a sorted key-value storage engine (historically RocksDB; newer versions use Pebble).

In CockroachDB, data is organized into ranges, which are contiguous, sorted key-value pairs. Each range is a portion of the overall key space and is replicated across multiple nodes in the cluster for high availability and fault tolerance.

CockroachDB uses a distributed consensus algorithm called Raft to ensure that all replicas of a range stay in sync. When a write operation is performed on a range, the Raft protocol is used to coordinate the write across all replicas, ensuring that each replica receives the update and that the replicas remain consistent with each other.

Data model:

  • Cassandra: a NoSQL database with a distributed wide-column (column-family) data model. It is designed to handle large volumes of data with high write throughput and low latency.
  • CockroachDB: a distributed SQL database that supports ACID transactions and uses a key-value store data model. It is designed to scale horizontally and handle large volumes of data with strong consistency.
  • MySQL: a relational database that uses a table-based data model. It is designed to handle small to medium-sized data sets with moderate write throughput and strong consistency.
  • PostgreSQL: a relational database that supports advanced SQL features and uses a table-based data model. It is designed to handle small to large-sized data sets with high write throughput and strong consistency.

Scalability:

  • Cassandra: designed to scale horizontally across multiple nodes in a cluster. It can handle high volumes of data and concurrent users with low latency.
  • CockroachDB: designed to scale horizontally across multiple nodes in a cluster. It uses a distributed SQL engine to provide strong consistency and support for ACID transactions.
  • MySQL: designed to scale vertically by increasing the resources of a single node. It can also be scaled horizontally using sharding or replication, but this can be complex to set up and manage.
  • PostgreSQL: designed to scale vertically by increasing the resources of a single node. It can also be scaled horizontally using sharding or replication, but this requires some effort to set up and manage.

Consistency:

  • Cassandra: uses eventual consistency by default, which means that updates may not be immediately visible to all nodes in a cluster. It also supports tunable consistency levels for read and write operations.
  • CockroachDB: uses strong consistency by default, which means that updates are immediately visible to all nodes in a cluster. It also supports tunable consistency levels for read and write operations.
  • MySQL: supports strong consistency for read and write operations, but this depends on the storage engine used. Some storage engines, such as MyISAM, do not support transactions and can lead to data inconsistencies.
  • PostgreSQL: uses strong consistency by default for read and write operations.

SQL support:

  • Cassandra: supports a limited subset of SQL commands and syntax.
  • CockroachDB: supports a large subset of SQL commands and syntax, including support for transactions and joins.
  • MySQL: supports a large subset of SQL commands and syntax, including support for transactions and joins.
  • PostgreSQL: supports advanced SQL features, including support for transactions, joins, stored procedures, and user-defined functions.

Redis and Memcached

One of the main differences between Memcached and Redis is their data model. Memcached provides a simple key-value store, where data is stored as key-value pairs. Keys can be any string, and values can be any object, such as strings, numbers, or even complex data structures like arrays or objects. Redis, on the other hand, provides a more advanced data model that includes support for strings, lists, sets, hashes, and sorted sets. This allows Redis to provide more advanced data manipulation features, such as atomic operations on sets and sorted sets.

Another difference between Memcached and Redis is their persistence model. Memcached does not provide any built-in persistence features, and data is not persisted to disk. This means that if a Memcached server goes down or is restarted, all cached data is lost. Redis, on the other hand, provides various persistence options, including snapshots and AOF (append-only file) persistence, which can be used to store data on disk and prevent data loss in case of server failure.

Redis writes to disk for persistence, which ensures that the data stored in memory is not lost in case of a server failure or shutdown. When Redis is running, all data is stored in memory, which provides extremely fast read and write performance. However, if Redis were to experience a failure or if it were shut down for maintenance, all data stored in memory would be lost.

To prevent data loss, Redis provides two mechanisms for persisting data to disk: snapshotting and append-only file (AOF) persistence.

  • With snapshotting, Redis periodically takes a snapshot of the in-memory data and writes it to disk.
  • With AOF persistence, Redis writes every write operation to a log file, which can be used to recover the data in case of a failure.
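Both mechanisms are enabled through real redis.conf directives (values here are illustrative):

```
save 900 1            # RDB: snapshot to disk if >= 1 write happened in 900 seconds
appendonly yes        # AOF: append every write operation to a log file
appendfsync everysec  # fsync the AOF once per second (a common durability/speed tradeoff)
```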

Redis Cluster

Redis Cluster is a distributed implementation of Redis that provides high availability and scalability. Redis Cluster is composed of multiple nodes, each running a Redis server and storing a subset of the keyspace. Redis Cluster uses sharding to distribute the keys across the nodes, with each node responsible for a subset of the keys.

Redis Cluster provides high availability through master-replica replication. Each shard of the keyspace has a master node and one or more replicas; the master handles write operations, while the replicas copy its data asynchronously and can serve reads. The quorum is used for failure detection, not for committing writes: a majority of the masters must agree that a master is down before one of its replicas is promoted. Because replication is asynchronous, a write acknowledged by a failed master can be lost during failover.

When a master node fails, Redis Cluster promotes one of its slave nodes to be the new master.

Why would you ever use Memcached instead of a Redis cluster then?

  • You need strongly consistent data (Redis Cluster’s asynchronous replication can lose acknowledged writes on failover)
  • You want alternative replication patterns, such as leaderless replication
  • You prefer to manage configuration and partitioning with an external coordination service rather than a gossip protocol

How does load balancer work?

Load balancers typically operate at the application layer (layer 7) or the transport layer (layer 4) of the OSI model.

The basic operation of a load balancer involves receiving incoming traffic from clients, then forwarding it to a pool of servers or instances based on a set of predefined rules. These rules can be based on factors such as server availability, network latency, server load, or a combination of these and other metrics.

  1. Round Robin: This algorithm distributes traffic evenly across all available servers, with each server receiving an equal share of requests.
  2. Least Connections: This algorithm directs traffic to the server with the fewest active connections at the time of the request.
  3. IP Hash: This algorithm hashes the client’s IP address to determine which server to send the request to, ensuring that the same client is always sent to the same server.

Once the load balancer has selected a server to handle a request, it establishes a connection between the client and the selected server. The load balancer can then monitor the server’s health and availability, and if the server becomes overloaded or unavailable, the load balancer can redirect traffic to other servers in the pool to ensure that requests are handled without disruption.
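A round-robin selector like algorithm 1 is only a few lines. A minimal sketch (the server names are made up):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal round-robin load-balancing selector.
class RoundRobin {
    private final List<String> servers;
    private final AtomicInteger next = new AtomicInteger();

    RoundRobin(List<String> servers) { this.servers = servers; }

    String pick() {
        // Math.floorMod keeps the index non-negative even after int overflow.
        return servers.get(Math.floorMod(next.getAndIncrement(), servers.size()));
    }
}

public class LbDemo {
    public static void main(String[] args) {
        RoundRobin lb = new RoundRobin(List.of("s1", "s2", "s3"));
        for (int i = 0; i < 4; i++) System.out.print(lb.pick() + " ");
        // prints: s1 s2 s3 s1
    }
}
```

IP hash (algorithm 3) would replace the counter with something like `Math.floorMod(clientIp.hashCode(), servers.size())`, so the same client always maps to the same server.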

SQL Injection

SQL injection is a type of cyber attack that exploits vulnerabilities in a website’s database system. It involves inserting malicious code into a website’s input fields, such as login forms or search boxes, to gain unauthorized access to sensitive data or execute unauthorized actions on the database.

In a SQL injection attack, the attacker typically sends a specially crafted SQL query to the website’s database server, which the server interprets as a legitimate query. The attacker can then use this query to access, modify, or delete sensitive data in the database, or to execute unauthorized commands on the server.

SQL injection attacks can be particularly dangerous because they can allow attackers to bypass authentication mechanisms, steal sensitive data, or even take control of an entire website or server. To prevent SQL injection attacks, it is important to implement proper input validation and parameterized queries, which help to prevent malicious code from being executed on the database.

Common defenses include prepared statements, validating input data, and input sanitization techniques.

Prepared statements work by separating the SQL statement from the input data. Instead of directly inserting the user-supplied data into the SQL statement, placeholders are used to represent the data.

  1. Security: By separating the SQL statement from the input data, prepared statements prevent SQL injection attacks. As it ensures that user input is treated as data rather than executable code.
  2. Performance: Prepared statements can be cached by the database server, allowing them to be reused with different input data. This can lead to faster query execution times.
  3. Maintainability: Prepared statements make it easier to write and maintain complex SQL queries, as the input data is separated from the SQL statement.
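To see why the separation matters, compare naive string concatenation with a placeholder. The JDBC part is left as a comment because it needs a live java.sql.Connection (assumed here); the rest runs as-is:

```java
public class InjectionDemo {
    public static void main(String[] args) {
        String userInput = "alice' OR '1'='1";

        // UNSAFE: concatenation lets the input rewrite the query itself.
        String unsafe = "SELECT * FROM users WHERE name = '" + userInput + "'";
        System.out.println(unsafe);
        // prints: SELECT * FROM users WHERE name = 'alice' OR '1'='1'
        // ...which matches every row, not just alice's.

        // SAFE: with a PreparedStatement the query shape is fixed up front,
        // and the driver passes userInput strictly as data, never as SQL.
        // PreparedStatement ps = conn.prepareStatement(
        //         "SELECT * FROM users WHERE name = ?");
        // ps.setString(1, userInput);
        // ResultSet rs = ps.executeQuery();
    }
}
```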

Deadlocks

A deadlock is a situation in computer science where two or more processes are blocked, waiting for each other to release a resource that they need to proceed. This creates a standstill where none of the processes can continue, resulting in a system-wide lock-up or freeze.

Deadlocks commonly occur in multi-threaded or distributed systems that use shared resources, such as databases, file systems, or network connections. When multiple processes or threads compete for the same resource, they may end up waiting indefinitely for each other to release it, causing a deadlock.

Suppose you have two processes, Process A and Process B, that are both trying to access two shared resources, Resource X and Resource Y.

Process A acquires Resource X and then attempts to access Resource Y.

Meanwhile, Process B acquires Resource Y and then attempts to access Resource X.

Both processes are now waiting for the other to release the resource they need to proceed. This creates a deadlock, as neither process can continue without the other releasing the resource they need.

To prevent deadlocks, various techniques can be employed,

  1. Deadlock prevention: This involves designing the system in such a way that deadlocks are avoided altogether. One way to do this is to impose a strict ordering of resources, so that each process requests and acquires resources in a predetermined order. For example, if we have two resources X and Y, we could enforce a rule that states that every process that needs to access both resources must request and acquire resource X before resource Y. This ensures that there is no possibility of two processes being blocked while waiting for each other to release a resource. We can also limit the number of resources that a process can request. To ensure resources are not hoarded by 1 process.
  2. Deadlock detection and recovery: This involves detecting when a deadlock has occurred and taking steps to resolve it. Deadlock detection algorithms periodically check the status of the system to determine if a deadlock has occurred. Once a deadlock is detected, recovery actions can be taken to break the deadlock, such as releasing resources or terminating one of the processes.
  3. Resource allocation and deallocation: This involves releasing one or more resources to allow the deadlock to be broken. This can be done by releasing all resources held by one of the processes involved in the deadlock or by pre-empting one of the processes and forcing it to release its resources.
  4. Timeouts: This involves setting a time limit for a process to acquire a resource. If the process is unable to acquire the resource within the specified time, it is forced to release any resources it is holding and re-attempt its request later.
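Technique 1 (resource ordering) applied to the X/Y example above: if every thread acquires X before Y, the circular wait can never form. A minimal sketch:

```java
public class LockOrdering {
    private static final Object X = new Object();
    private static final Object Y = new Object();

    // Both threads follow the same global order: X first, then Y.
    // This removes the circular-wait condition required for deadlock.
    static void work(String name) {
        synchronized (X) {
            synchronized (Y) {
                System.out.println(name + " holds X and Y");
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Thread a = new Thread(() -> work("A"));
        Thread b = new Thread(() -> work("B"));
        a.start(); b.start();
        a.join(); b.join(); // both threads always finish; no deadlock
    }
}
```

If thread B instead locked Y before X (the scenario described above), the program could hang forever whenever the two acquisitions interleave.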

Multi-threading

Multithreading is the process of executing multiple threads concurrently within a single process. A thread is a unit of execution within a program, which runs independently of other threads in the same process.

In a single-threaded program, there is only one thread of execution, and the program executes one instruction at a time. Multithreading allows for multiple threads to execute simultaneously, which can increase the performance of a program by utilizing the available resources more efficiently.

Each thread has its own set of registers, stack, and program counter, but they share the same memory and other system resources of the process. This allows for communication between threads and the sharing of data.

Multithreading can be used for a variety of purposes, such as improving the responsiveness of a user interface, processing multiple requests concurrently, or performing background tasks while the main thread continues to execute.

However, multithreading can also introduce some challenges, such as synchronization issues, race conditions, and deadlocks, where two or more threads are blocked waiting for each other to release resources. These issues can lead to bugs and program crashes if not handled properly.

Overall, multithreading is a powerful tool for improving the performance and efficiency of a program, but it requires careful design and implementation to ensure that threads are synchronized and resources are shared correctly.

Refer to https://liverungrow.medium.com/summary-article-of-what-i-have-learned-in-my-nus-modules-5a6be6d2b733 -> Search for “threads”

How does service mesh work?

A service mesh is a dedicated infrastructure layer for managing service-to-service communication within a microservices architecture.

It consists of a set of network proxies, or sidecars, deployed alongside each service instance in the cluster. These sidecars intercept all incoming and outgoing traffic and provide a range of features, such as load balancing, service discovery, traffic routing, security, and observability.

The basic architecture of a service mesh is based on the concept of a data plane and a control plane. The data plane consists of the sidecar proxies that intercept and manage the traffic between services. The control plane, on the other hand, provides a central management layer for configuring and controlling the behavior of the proxies.

  • The control plane is typically a separate software component that is responsible for managing the configuration and state of the sidecar proxies. It provides an API for configuring rules and policies, such as traffic routing, load balancing, security, and observability. The sidecar proxies consult the control plane to determine how to handle incoming and outgoing traffic.

When a service wants to communicate with another service, it sends a request to its local sidecar proxy instead of directly to the destination service. The sidecar proxy intercepts the request and performs a series of actions to ensure that the request is delivered to the appropriate destination service.

  1. The service instance sends a request to its local sidecar proxy.
  2. The sidecar proxy intercepts the request and consults the rules and policies that have been configured in the control plane for that particular service.
  3. The sidecar proxy determines the appropriate destination service for the request based on the rules and policies, such as load balancing or service discovery.
  4. The sidecar proxy then forwards the request to the destination service’s sidecar proxy, which in turn forwards it to the destination service instance.
  5. The destination service instance processes the request and sends a response back to its sidecar proxy.
  6. The destination sidecar proxy then forwards the response back to the source sidecar proxy.
  7. Finally, the source sidecar proxy delivers the response back to the requesting service instance.

Not all services in a microservices architecture necessarily need to be part of the service mesh. Services that are not part of the service mesh can still communicate with services that are part of the mesh, but they will not benefit from the advanced traffic management features provided by the service mesh.

How is this different from the usual microservice discovery?

Traditional microservice architectures often use a service discovery mechanism to locate and communicate with other services. Service discovery typically involves a central registry or a distributed database that maintains a list of all available services and their endpoints.

However, service discovery alone does not provide any features for managing the traffic between services. In a typical microservice architecture, each service instance is responsible for performing its own load balancing, health checking, and retries, which can lead to inconsistent behavior and increased complexity.

A service mesh, on the other hand, provides a dedicated infrastructure layer for managing service-to-service communication. By deploying a sidecar proxy alongside each service instance, a service mesh can provide advanced traffic management features, such as load balancing, service discovery, traffic routing, security, and observability.

In a service mesh, the sidecar proxy intercepts all incoming and outgoing traffic and manages the communication between services based on the rules and policies defined in the control plane. This allows for centralized management of traffic routing and provides a consistent way to manage advanced features like A/B testing, canary releases, and circuit breaking.

Overall, while service discovery is an important part of a microservices architecture, it only provides a way to locate services, whereas a service mesh provides a dedicated infrastructure layer for managing service-to-service communication and enables advanced traffic management features.

Does HTTPS encrypt GET URL params

  • Yes — the URL path and query parameters are encrypted. The domain itself is not hidden, though, because it is needed for routing: it is visible in DNS lookups and in the TLS SNI extension.

In HTTPS, only the contents of the HTTP requests and responses are encrypted, not the domain name or the server IP address. When a client makes an HTTPS request, the first step is to establish a secure connection with the server using a process called the SSL/TLS handshake.

During the SSL/TLS handshake, the client and server exchange digital certificates to verify each other’s identities and negotiate a shared encryption key to use for the secure communication. The digital certificate includes information about the domain name of the server, which is used by the client to verify that it is communicating with the intended server and not an imposter.

Once the SSL/TLS handshake is complete and the secure connection is established, the client can send the HTTP request to the server. The request, including the Host header, the resource path, and any query parameters, is encrypted inside the secure connection.

At this point, the server can use the domain name to determine which service or application should handle the request, just as it would with an unencrypted HTTP request. The server can also use other routing mechanisms, such as load balancing or routing rules configured in a service mesh, to determine how to handle the request.

Overall, HTTPS encryption does not prevent the server from routing requests by domain name or resource path: the server decrypts the request before routing it, and the destination hostname is already known from the handshake. What HTTPS hides from network observers is the path, query parameters, headers, and body.

Downsides and good of having multiple indexes?

The main advantage of having multiple indexes is that it can improve the performance of different types of queries, especially for complex applications that require a variety of query types.

However, there are some downsides to having multiple indexes in MongoDB, including:

  1. Increased storage space: Each index requires additional storage space, which can add up quickly for large collections with many indexes. This can increase the cost of running the database and may require more hardware resources.
  2. Increased write overhead: Each index needs to be updated whenever a document is inserted, updated, or deleted, which can slow down write performance. This can become a problem when a collection has a high write throughput and many indexes.
  3. Index selection overhead: When executing a query, MongoDB needs to determine which index is best suited for the query based on the query predicate, sort order, and other factors. If there are many indexes, this can increase the overhead of index selection and slow down query performance.

Microservices? Good and bad?

Good:

  1. Good separation of responsibilities
  2. Easier maintainability. Push out code faster.
  3. Scalability. Services that receive more traffic can be scaled independently of the others.
  4. Technology heterogeneity. Allow different services to be written using different languages.

Bad:

  1. Code reuse is harder; shared logic must be extracted into libraries or duplicated across services.
  2. Testing is harder: verifying behavior that spans several services requires integration or end-to-end tests.

How does DNS work?

Domain Name System: translates between host names (e.g., www.comp.nus.edu.sg) and IP addresses.

The DNS stores Resource records in distributed databases implemented in a hierarchy of many name servers.

At the top of the hierarchy are root servers located in different parts of the world. When a request first arrives, a root server refers the user to a top-level domain (TLD) server; these servers take charge of .com, .edu, or country-level domain names. The TLD server in turn refers the user to the authoritative server, which is the organisation’s own name server.

  • Recursive query: the contacted server takes on the work of resolving the name fully and returns the final answer.
  • Iterative query: the contacted server replies with a referral to the next server to ask, and the client continues on its own.

Local DNS (default name) server -> does not belong to the hierarchy. Each ISP has a local DNS server. It serves name-to-address translations from a local cache; cache entries expire after their Time to Live (TTL) countdown.

1. After the URL is entered, the browser cache is checked first. The browser keeps DNS records for recently visited websites for a limited time, so the DNS query may be answered here.

2. Next, the DNS query runs against the OS cache, followed by the router cache.

3. If the query is still unresolved, it is sent to the resolver server, which is typically run by your ISP (Internet service provider). The resolver first checks its own cache.

4. If that also fails, the resolver queries a root server at the top of the DNS hierarchy. A root server never answers "not found"; instead it tells the resolver where to look next — the top-level domain (TLD) server for the relevant suffix (.com, .net, .gov, .org, etc.).

5. The resolver then asks the TLD server for the IP address of the domain. The TLD server stores address information for the domains under it and refers the resolver to the domain's authoritative name server.

6. The authoritative name server is responsible for knowing everything about the domain name. From it, the resolver (ISP) finally obtains the IP address associated with the domain name and sends it back to the browser.

7. Once the IP address of the server hosting the website is found, the browser initiates a connection to it. The most common protocol is TCP/IP, and the connection is built using a process called the 'TCP 3-way handshake'. Let's understand the process in brief:

1. The client sends a SYN message, asking whether the server is open for a new connection.

2. If the server accepts new connections, it replies with a SYN-ACK (its own SYN plus an acknowledgment).

3. The client receives this and completes the handshake by sending an ACK message.
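The steps above can be sketched in Python. A single library call hides the whole resolution chain, and `connect()` triggers the kernel's 3-way handshake. This sketch uses `localhost` and a throwaway server socket so it runs offline; any hostname works the same way.

```python
import socket

# The OS resolver performs steps 1-6 above (browser/OS caches, ISP resolver,
# root -> TLD -> authoritative servers) behind this single call.
ip = socket.gethostbyname("localhost")

# connect() triggers the TCP 3-way handshake (SYN, SYN-ACK, ACK) in the kernel.
server = socket.socket()
server.bind((ip, 0))          # port 0: let the OS pick a free port
server.listen(1)

client = socket.socket()
client.connect(server.getsockname())   # handshake completes here
conn, addr = server.accept()

client.close(); conn.close(); server.close()
```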

Secondary index

Disadvantages

With DML operations like DELETE / INSERT , the secondary index also needs to be updated so that the copy of the primary key column can be deleted / inserted. In such cases, the existence of lots of secondary indexes can create issues.

Also, if the primary key is very large (like a URL), storage can become inefficient, since every secondary index stores a copy of the primary key column value. More secondary indexes mean more duplicate copies of the primary key value. The primary key index itself also stores the keys, so the combined effect on storage can be very high.

https://www.freecodecamp.org/news/database-indexing-at-a-glance-bb50809d48bd/

What is composite index?

The columns used in composite indices are concatenated together, and those concatenated keys are stored in sorted order using a B+ Tree.

When do you need it?

  • Analyze your queries according to your use cases first. If certain fields appear together in many queries, consider creating a composite index on them.
  • If you have an index on col1 and a composite index on (col1, col2), the composite index alone is enough: queries on col1 can be served by it, since col1 is a leftmost prefix of the index.
  • Consider cardinality. If the columns used in the composite index have high cardinality together, they are good candidates for a composite index.

For example, if you have a table of customers and you often search for customers based on their last name and their city, you might create a composite index on the “last name” and “city” columns.
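A quick way to see both points above (the customers example and the leftmost-prefix rule) is SQLite's `EXPLAIN QUERY PLAN`. The table and index names here are made up to match the example:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, last_name TEXT, city TEXT)")
# Composite index: keys are (last_name, city) concatenated, stored sorted.
db.execute("CREATE INDEX idx_last_city ON customers (last_name, city)")

# Query on both columns uses the composite index...
plan_both = db.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM customers WHERE last_name=? AND city=?",
    ("Tan", "Singapore")).fetchall()

# ...and so does a query on last_name alone, since it is a leftmost prefix.
plan_prefix = db.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM customers WHERE last_name=?",
    ("Tan",)).fetchall()
```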

What is Kafka?

Kafka is a distributed streaming platform that is designed to handle real-time data feeds. It is based on the publish-subscribe messaging pattern and uses a distributed architecture to ensure scalability, fault tolerance, and high throughput.

In Kafka, data is organized into topics. A topic is a category or feed name to which messages are published. Producers are responsible for producing messages and publishing them to a Kafka topic. Consumers subscribe to a topic and consume messages from it.

Kafka brokers are the nodes in the Kafka cluster that are responsible for storing and processing messages. Each broker can handle multiple topics and partitions. A partition is a unit of parallelism that represents a portion of the data in a topic.

When a message is published to a topic, it is assigned a unique offset within the partition. Each consumer maintains its own offset, which tracks the last message it consumed. This allows multiple consumers to read from the same partition without interfering with each other.

Kafka also provides features like data retention, replication, and fault tolerance. It stores messages for a configurable period of time or until a certain amount of data has accumulated. It replicates messages across multiple brokers for fault tolerance and high availability.

Overall, Kafka is designed to handle large volumes of data in a highly scalable and fault-tolerant manner, making it a popular choice for building real-time data pipelines and streaming applications.
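The bookkeeping described above (topics split into partitions, per-key partition assignment, per-consumer offsets) can be illustrated with a toy in-memory model. This is only a sketch of the concepts, not how Kafka is implemented; all names are illustrative:

```python
from collections import defaultdict

class Topic:
    """A topic is a named feed, split into append-only partition logs."""
    def __init__(self, partitions=2):
        self.partitions = [[] for _ in range(partitions)]

    def produce(self, key, value):
        # Messages with the same key land in the same partition,
        # which is what gives per-key ordering in Kafka.
        p = hash(key) % len(self.partitions)
        self.partitions[p].append(value)
        return p, len(self.partitions[p]) - 1   # (partition, offset)

class Consumer:
    """Each consumer tracks its own offsets, so consumers don't interfere."""
    def __init__(self, topic):
        self.topic = topic
        self.offsets = defaultdict(int)

    def poll(self, partition):
        log = self.topic.partitions[partition]
        messages = log[self.offsets[partition]:]
        self.offsets[partition] = len(log)      # "commit" the new offset
        return messages

topic = Topic(partitions=2)
part, offset = topic.produce("user-1", "signup")
consumer = Consumer(topic)
messages = consumer.poll(part)
```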

What’s the difference between MySQL and PostgreSQL?

Here are some of the key differences between MySQL and PostgreSQL:

  1. Data types: MySQL and PostgreSQL differ in their supported data types. PostgreSQL supports more types than MySQL, such as arrays, hstore, and geometric types. PostgreSQL’s JSON support is also more advanced (including the indexable JSONB type), although MySQL has offered a JSON type since version 5.7.
  2. Transactions and concurrency: Both databases support transactions and concurrency, but PostgreSQL’s implementation is generally considered more robust and feature-rich. PostgreSQL supports a wider range of transaction isolation levels and has a more advanced locking mechanism that allows for better concurrency.
  3. Extensibility: PostgreSQL is designed to be highly extensible and supports a rich set of user-defined functions, operators, and custom data types. This makes it well-suited for complex data models and specialized applications. MySQL, on the other hand, has a simpler architecture and is generally easier to set up and use for simpler applications.
  4. Performance: MySQL is known for its performance and scalability, especially for read-heavy workloads. However, PostgreSQL is generally considered to be more suitable for complex queries and write-heavy workloads.
  5. Licensing: MySQL is owned by Oracle and is available under the GPL or a commercial license, while PostgreSQL is released under the more permissive PostgreSQL License, which allows for more flexible use and distribution.

Use MySQL when:

  1. Your application requires high performance for simple transactions.
  2. You need a database that is easy to set up and use.
  3. Your application has simple, predictable, and consistent data relationships.
  4. You need high availability and scalability, particularly for read-intensive workloads.
  5. You need to integrate with other widely used technologies, such as PHP.

Use PostgreSQL when:

  1. Your application requires complex queries and/or data analysis.
  2. You need a database that supports advanced data types, such as arrays and JSON.
  3. Your application has complex data relationships and/or requires advanced transactional features.
  4. You need advanced security and data protection features, such as row-level security and full-text search.
  5. You need to integrate with other widely used technologies, such as Python.

What’s the difference between Thread and Process?

  1. Threads exist within a process and share the same memory space, while processes have their own memory space and do not share data unless explicitly communicated.
  2. Creating a thread is generally faster and less resource-intensive than creating a process.
  3. Switching between threads is generally faster than switching between processes.
  4. Because threads share memory, it is possible for multiple threads to access and modify the same data concurrently, which can lead to synchronization issues such as race conditions and deadlocks. Processes do not have this issue because they have their own memory space.
  5. Because processes do not share memory, communication between processes must be done through inter-process communication (IPC), which can be more complex and slower than communication between threads.
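Points 1 and 4-5 can be demonstrated directly: a thread mutates the parent's data in place, while a child process works on its own copy. A minimal sketch:

```python
import threading
import multiprocessing

def worker(container):
    container.append("hello")

# Threads share the parent's memory space: the child thread's mutation
# is visible in the parent afterwards.
thread_list = []
t = threading.Thread(target=worker, args=(thread_list,))
t.start()
t.join()

# A process gets its own memory space: the child mutates its *copy* of the
# list, so the parent's list stays empty (explicit IPC would be needed to
# get the result back).
process_list = []
if __name__ == "__main__":
    p = multiprocessing.Process(target=worker, args=(process_list,))
    p.start()
    p.join()
```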

Out Of Memory

What is, how to prevent, what causes it?

An Out Of Memory (OOM) error happens when a program tries to use more memory than the system (or the runtime’s configured limit) can provide. It is commonly discussed in the context of managed-memory languages such as Java, Python, or Ruby, where the runtime heap can be exhausted.

Reasons that can cause OOM error

  1. When a program creates too many objects that cannot be garbage collected (for example, because they are still referenced).
  2. When a program loads large data into memory, such as a large file or an image.
  3. A memory leak, where a program continuously allocates memory without releasing it.

To prevent OOM errors, it’s important to carefully manage memory usage in your programs. This includes properly closing database connections, releasing memory allocated for unused objects, and avoiding the use of large data structures when possible. Using tools like profiling can help you identify memory leaks and areas of your code that are causing high memory usage.

In Java, you can also increase the maximum heap size using the -Xmx flag. This can help prevent OOM errors caused by running out of memory due to large data structures or other memory-intensive operations.
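The leak pattern in point 3 often looks like an unbounded cache that keeps every entry referenced forever. A hedged Python sketch of the problem and one common fix (all names here are illustrative):

```python
from functools import lru_cache

# Leak pattern: an unbounded module-level cache. Every entry stays
# referenced, so the garbage collector can never reclaim it.
cache = {}

def process(request_id, payload):
    cache[request_id] = payload      # grows forever -> eventual OOM
    return len(payload)

# Fix sketch: bound the cache so old entries are evicted automatically.
@lru_cache(maxsize=1024)
def expensive(x):
    return x * x

for request_id in range(3):
    process(request_id, "payload")
```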

When receiving request, how do we know if it is from a logged in user?

(Session — Cookie)

To know if a request is from a logged-in user, you need to implement some form of user authentication in your application. The authentication process verifies the identity of the user who is making the request and confirms that they are authorized to access the requested resource.

One common approach to user authentication is to use session management. When a user logs in, a session is created for them on the server and a unique session ID is assigned to that session. The session ID is then sent to the user’s browser as a cookie or as a URL parameter. The browser then includes the session ID with each subsequent request it makes to the server, allowing the server to identify the user and their session.

In a web application, the server typically uses a combination of session timeouts and session invalidation mechanisms to determine when to remove a session when a user exits the application.

  1. Session Timeout: The server usually sets a timeout for each session, specifying the period of inactivity after which the session is considered expired. This timeout is typically defined in the server configuration or through a session management mechanism. When a user is inactive for a duration exceeding the session timeout, the server automatically invalidates the session and removes it.
  2. Session Invalidation: In addition to the session timeout, the server may also provide explicit ways to invalidate a session. For example, when a user explicitly logs out of the application or performs a specific action that requires the session to be terminated, the server can invalidate the session immediately, removing it from memory.

(Token)

Another approach is to use token-based authentication. In this approach, when a user logs in, a JSON Web Token (JWT) is generated and sent back to the client. The client includes the JWT in the Authorization header of each request it makes to the server, and the server verifies the JWT to authenticate the user.

By implementing user authentication in your application, you can ensure that only authorized users can access sensitive resources, and you can also track user activity and customize the user experience based on their identity.
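The token approach boils down to: the server signs a payload with a secret, and later verifies the signature on every request. A simplified JWT-like sketch using HMAC (no header, expiry, or key rotation; the secret and field names are illustrative; real applications should use a proper JWT library):

```python
import base64
import hashlib
import hmac
import json

SECRET = b"server-side-secret"   # illustrative; real apps use a managed key

def issue_token(user_id):
    # Sign the payload so the client cannot tamper with it.
    payload = base64.urlsafe_b64encode(json.dumps({"sub": user_id}).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    return payload + b"." + sig

def verify_token(token):
    payload, sig = token.rsplit(b".", 1)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        return None                      # tampered or forged token
    return json.loads(base64.urlsafe_b64decode(payload))["sub"]
```

The server stores nothing per user session here; the signature alone proves the token was issued by the server, which is the key difference from server-side sessions.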

What is man in the middle attack?

A man-in-the-middle (MITM) attack is a type of cyber attack where a hacker intercepts communication between two parties in order to eavesdrop, steal sensitive information, or impersonate one of the parties involved. The attacker secretly intercepts and possibly alters the communication between the two parties in a way that allows them to steal data or gain access to the system.

One common example of a MITM attack is when a hacker sets up a fake public Wi-Fi network in a public place, such as a coffee shop or airport, and waits for unsuspecting victims to connect to it. Once connected, the attacker can intercept and record all the data that passes between the victim and the internet, including sensitive login credentials or financial information.

To prevent MITM attacks, it is important to use secure communication protocols such as HTTPS, SSL, or TLS that encrypt data to prevent eavesdropping. It is also important to be cautious when connecting to public Wi-Fi networks, as they may be vulnerable to MITM attacks.

What is spring? Explain dependency injection?

Dependency injection is a design pattern that allows objects to be loosely coupled by injecting their dependencies rather than creating them directly. This approach enhances code reusability, testability, and modularity.

In Spring, objects are typically managed as beans, which are Java objects that are instantiated, assembled, and managed by the Spring IoC (Inversion of Control) container. The Spring container is responsible for creating and managing instances of beans and wiring them together based on their defined dependencies.

Spring beans are singletons by default, but this is configurable: the scope of a bean determines the lifecycle and visibility of its instances. With the singleton scope, only one instance of a bean is created within the container and shared throughout the application context.

However, Spring supports different bean scopes, including singleton, prototype, request, session, and more. You can configure the scope of a bean according to your specific requirements. For example, if you declare a bean with the prototype scope, a new instance will be created whenever the bean is requested.

It’s important to note that the singleton scope in Spring does not guarantee thread-safety. If a singleton bean contains mutable state, proper synchronisation mechanisms should be used to ensure thread-safety.

@Scope("singleton")

@Scope("prototype") // are created every time they are requested from the container

@Scope("request") // Beans with request scope are created once per HTTP request in a web application. They are destroyed at the end of the request.

@Scope("session") // created once per user session in a web application; destroyed when the session expires.

  1. Prototype Scope:
  • Stateful Objects: If you have stateful objects that should have a unique instance each time they are requested, you can use the prototype scope. This is useful when you want to avoid sharing mutable state across different components or when you need a new instance for each interaction.
  • Heavyweight Objects: If your beans are resource-intensive and creating a new instance for each request is not a performance concern, using the prototype scope can help manage resource usage. For example, if a bean holds a large cache or connection pool, creating a new instance for each request ensures that the resources are fresh and not shared.

2. Request Scope:

  • Web Applications: In a web application, you might have components that need to be scoped to a specific HTTP request. For example, you might have a bean that holds user-specific data or performs request-specific processing. In such cases, using the request scope ensures that a new instance is created for each incoming request and is available only within that request.
  • Multi-threading: If you’re developing a multi-threaded web application and need to isolate objects per request to avoid thread-safety issues, request scope can be useful. Each thread processing a separate request will have its own instance of request-scoped beans.

3. Session Scope:

  • User-specific Data: In a web application, you may have components that store user-specific data during a user session, such as user preferences or shopping cart information. By using the session scope, you can ensure that each user has a separate instance of the bean associated with their session.
  • Conversation State: In some cases, you might need to maintain conversational state across multiple HTTP requests within a user session. For example, if you’re implementing a multi-step wizard or a multi-page form, you can use the session scope to retain the state of the conversation across requests.
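Spring wires Java beans, but the underlying pattern, constructor injection, translates to any language. A minimal Python sketch (class names are made up) of what the IoC container automates:

```python
class SmtpMailer:
    def send(self, to, body):
        return f"smtp:{to}:{body}"

class FakeMailer:                    # swap in for tests, no code changes
    def send(self, to, body):
        return f"fake:{to}:{body}"

class SignupService:
    def __init__(self, mailer):      # the dependency is injected, not created
        self.mailer = mailer

    def register(self, email):
        return self.mailer.send(email, "welcome")

# The "wiring" a container like Spring would do from configuration:
service = SignupService(FakeMailer())
```

Because `SignupService` never constructs its own mailer, it is loosely coupled: swapping the real mailer for a fake one in tests requires no change to the service itself.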

How are images rendered on a webpage?

  1. HTML <img> tag: Explain that to render an image on a web page, you can use the HTML <img> tag. This tag is a self-closing tag and requires the src attribute to specify the path or URL of the image file. Mention that it's common practice to include the alt attribute to provide alternative text for the image.
  2. File location and accessibility: Emphasize the importance of having the image file accessible by the web server. Discuss how the image file should be placed in a location that the web server can access, either on the local server or a remote server reachable via URL.
  3. HTML structure: Mention that the HTML document should have the proper structure with the <img> tag placed within the <body> tags. Highlight that the rest of the HTML document, including the <head> section, is essential for defining the structure and behavior of the webpage but is not directly related to rendering the image.
  4. Styling with CSS (optional): If you have knowledge of CSS, you can mention that additional styling can be applied to the image using CSS properties such as width, height, borders, and more. Clarify that CSS can be added inline or through an external CSS file, allowing for greater control over the image’s appearance.
  5. Testing and viewing the image: Explain that after saving the HTML file, it can be opened in a web browser to view the rendered image. Emphasize the importance of testing the webpage on different devices and browsers to ensure the image displays correctly and is responsive.

What is CDN?

CDN stands for Content Delivery Network. It is a network of servers located in various geographical locations worldwide. The primary purpose of a CDN is to deliver web content, such as images, CSS files, JavaScript files, videos, and other static or dynamic assets, to end-users with high performance and availability.

When a user requests content from a website that utilizes a CDN, the CDN server closest to the user’s location delivers the content, rather than the content being served directly from the website’s origin server. This proximity helps reduce latency and improves the overall browsing experience for the user.

General knowledge about OpenOnload.

  • OpenOnload aims to improve network performance and reduce latency by bypassing the operating system’s kernel network stack: TCP/IP processing runs in user space, close to the application, avoiding system calls, context switches, and interrupt handling on the hot path.
  • Reduces load on the CPU.
  • TCP/IP stack processing refers to the set of protocols and algorithms that are used for communication over a TCP/IP network.
  • TCP/IP stack processing involves handling the different layers of the protocol stack to ensure reliable and efficient communication between network devices.
  • Layers (Internet/Network layer — IP, ICMP; Transport layer — TCP, UDP; Application layer — HTTP, SSH)

OS

What is a bootstrap program in OS?

  • Program that is executed to initialise the OS whenever the computer system starts up.

What is demand Paging?

  • The process attempts to access a page.
  • If the page is resident (in memory), proceed as normal.
  • Otherwise, trigger a page fault; check that the memory reference is a valid reference into secondary memory, then retrieve the page.

Diff between main memory and secondary memory?

  • Main memory: RAM.
  • Secondary memory: Storage devices, external memory.

What is virtual memory?

  • It is a memory management technique of the OS that creates the illusion for users of a very large (main) memory.
  • It extends the effective physical memory by using disk, and also enables memory protection.

Thread

Thread is a path of execution. Each thread has its own: Program counter, Registers, Stack, and State.

What is the difference between paging and segmentation?

Paging: It is generally a memory management technique that allows the OS to retrieve processes from secondary storage into main memory. It is a non-contiguous allocation technique that divides each process in the form of pages.

Segmentation: It is generally a memory management technique that divides a process into modules and parts of different sizes. These parts are known as segments, which can be allocated to the process.

  • Segment sizes are variable; page sizes are fixed.

What is thrashing in OS?

  • It is a situation where the CPU spends more time on swapping and paging activity than on actual execution. It occurs when a process does not have enough frames for its pages, so the page-fault rate climbs.

Explain zombie process?

A zombie process is one that has terminated or completed, but whose process control block has not been fully cleaned up from main memory, because it still has an entry in the process table so it can report its exit status to its parent. If the parent fails to collect that exit status, the entry lingers. A zombie is dead and consumes no CPU or memory of its own, but each zombie occupies a process-table slot, and enough of them can exhaust the table. To prevent zombie processes, you can follow these steps:

  • The wait() or waitpid() system calls allow the parent process to explicitly wait for child processes to terminate and collect their exit status. These calls block the parent process until a child process terminates
  • Once the parent process has collected the exit status of a child process, it should remove the corresponding entry from the process table

What is starvation and aging in OS?

Starvation: It is a problem that occurs when a process is unable to get the resources it needs to make progress for a long period of time, typically because it has low priority and higher-priority processes keep being served first.

Aging: To overcome starvation problem, simply increase the priority of processes that have been waiting for a long time.

Network internet layers

Physical Layer:

The physical layer is the lowest layer in the protocol suite.

It deals with the physical transmission of data over the network medium, such as copper wires, fiber optic cables, or wireless signals.

It defines electrical, mechanical, and procedural specifications for transmitting raw bits.

Data Link Layer:

The data link layer provides reliable data transfer between directly connected devices on the same network segment.

It handles framing of data into frames, error detection and correction, and flow control.

Ethernet, Wi-Fi, and Point-to-Point Protocol (PPP) are examples of data link layer protocols.

Wi-Fi is a wireless networking technology that allows devices to connect to the internet or communicate with each other without the need for physical cables. (The name is a brand, not actually an acronym for “Wireless Fidelity”, despite the common gloss.) It provides wireless access to local area networks (LANs) and the internet.

Wi-Fi operates using radio waves to transmit and receive data between devices.

Network Layer (Internet Protocol — IP):

The network layer enables communication between devices across different networks.

It assigns unique IP addresses to devices and routes data packets from the source to the destination.

The Internet Protocol (IP) is the primary network layer protocol.

IP is responsible for addressing, fragmentation and reassembly of packets, and logical routing.

Transport Layer (Transmission Control Protocol — TCP and User Datagram Protocol — UDP):

The transport layer provides end-to-end communication services between applications running on different devices.

TCP offers reliable, connection-oriented communication. It ensures data delivery, ordered transmission, and congestion control.

UDP provides connectionless, unreliable communication. It is faster but offers no guarantees for delivery or ordering.

The transport layer uses port numbers to identify different application services.

Application Layer:

The application layer is the highest layer in the protocol suite and is closest to the end user.

It provides protocols and services that directly support user applications.

Examples include HTTP (Hypertext Transfer Protocol) for web browsing, FTP (File Transfer Protocol), SMTP (Simple Mail Transfer Protocol) for email, and DNS (Domain Name System) for domain name resolution.

These layers work together to facilitate communication and data transfer across networks. Each layer performs specific functions and interacts with adjacent layers to ensure data integrity, addressing, routing, and application-level services.

How does wifi work?

Wi-Fi works by using radio waves to transmit and receive data between devices without the need for physical cables.

A wireless router or access point broadcasts a Wi-Fi signal, which devices equipped with wireless network adapters can detect and connect to. The devices authenticate with the network using a password or passphrase and are assigned an IP address.

Data Transmission: Once connected, devices can transmit and receive data over the Wi-Fi network.

The router acts as a bridge between the Wi-Fi network and the internet, allowing devices to access online resources.

MAC address Vs IP address

A MAC address is a unique identifier assigned to the network interface card (NIC) of a device. It is a hardware address that is burned into the NIC during manufacturing and is typically represented as a series of hexadecimal numbers separated by colons or hyphens. MAC addresses operate at the data link layer (Layer 2) of the OSI model.

  • Assigned during manufacturing
  • Used for devices to communicate with a LAN. Help direct data packets to the correct device on the local network.
  • Not used for communication across different networks; inter-network routing relies on IP addresses.

An IP address is a numerical label assigned to each device connected to a network. It is used to identify and locate devices in a network and enables communication across different networks, including the internet. IP addresses operate at the network layer.

  • IP addresses are assigned to devices by a network administrator or obtained dynamically through protocols like DHCP (Dynamic Host Configuration Protocol).
  • They consist of a series of numbers separated by periods (IPv4–32 bits) or groups of hexadecimal numbers separated by colons (IPv6–128 bits).
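Python's standard `ipaddress` module makes the two address formats concrete:

```python
import ipaddress

# IPv4: four decimal numbers separated by periods, 32 bits total.
a = ipaddress.ip_address("192.168.1.10")

# IPv6: groups of hexadecimal digits separated by colons, 128 bits total.
# (2001:db8::/32 is the range reserved for documentation examples.)
b = ipaddress.ip_address("2001:db8::1")
```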

DNS DHCP

  • Both use UDP as the underlying transport protocol.

How to improve upon UDP?

Go-Back-N (GBN):

  • Go-Back-N is a sliding window-based protocol that provides a simple and efficient approach for reliable data transfer.
  • The sender divides the data into fixed-size packets and assigns a sequence number to each packet before transmitting them.
  • The sender maintains a window that represents the range of packets that can be sent without waiting for acknowledgments.
  • Upon sending the window of packets, the sender starts a timer.
  • The receiver acknowledges the receipt of packets by sending cumulative acknowledgments, indicating the highest sequence number received successfully.
  • When the sender receives a cumulative acknowledgment, it slides the window forward, allowing the next set of packets to be sent.
  • If the sender does not receive an acknowledgment within the timeout period, it retransmits all the packets within the window.

Go-Back-N is a reliable protocol, but it can lead to unnecessary retransmissions when a single packet is lost. It assumes that the receiver has enough buffer space to store and process packets in the correct order.
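The behaviour described above, including the unnecessary retransmissions, shows up clearly in a small simulation. This sketch models a lossy channel where chosen packets are dropped on their first transmission only; a GBN receiver discards out-of-order packets and acknowledges cumulatively:

```python
def go_back_n(num_packets, window, drop_first):
    """Return total transmissions needed to deliver all packets.

    drop_first: sequence numbers whose *first* transmission is lost.
    """
    drop_first = set(drop_first)
    base = 0          # oldest unacknowledged packet (window start)
    expected = 0      # receiver's next in-order sequence number
    transmissions = 0
    while base < num_packets:
        # Send everything in the current window.
        for seq in range(base, min(base + window, num_packets)):
            transmissions += 1
            if seq in drop_first:
                drop_first.discard(seq)  # lose only the first copy
                continue                 # receiver never sees it
            if seq == expected:
                expected += 1            # in order: deliver
            # out-of-order packets are discarded by a GBN receiver
        base = expected   # cumulative ack slides the window forward
    return transmissions
```

With 5 packets, a window of 3, and packet 1 lost once, packet 2 arrives intact but is discarded and resent along with packet 1, illustrating the wasted retransmissions the paragraph above mentions.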

Selective Repeat:

  • Selective Repeat is another sliding window-based protocol that offers more efficient retransmission in the presence of packet loss or corruption.
  • Similar to Go-Back-N, the sender assigns sequence numbers to packets and maintains a window for transmission.
  • The receiver sends individual acknowledgments for each successfully received packet, indicating its sequence number.
  • The sender maintains a buffer to store the sent packets until they are acknowledged.
  • Unlike Go-Back-N, the receiver does not discard out-of-order packets. Instead, it buffers the packets until they can be delivered to the upper layer in the correct order.
  • Upon receiving an acknowledgment, the sender updates its window, including new packets to be sent.
  • In case of packet loss or corruption, the receiver can request retransmission of specific packets by sending a negative acknowledgment (NACK) or using a separate mechanism such as a selective reject (SREJ) message.

Selective Repeat is more efficient than Go-Back-N in terms of retransmissions, as it allows for individual retransmission of lost or corrupted packets instead of retransmitting an entire window. However, it requires additional complexity in terms of packet buffering and handling out-of-order delivery.

Key Differences of Selective Repeat and TCP:

  1. Layer: Selective Repeat is a generic ARQ technique, classically taught at the data link layer, while TCP operates at the transport layer.
  2. Acknowledgment Granularity: Selective Repeat uses individual acknowledgments for each frame, while TCP uses cumulative acknowledgments for multiple segments.
  3. Retransmissions: In Selective Repeat, retransmissions are specific to missing frames, whereas TCP retransmits missing segments, assuming earlier segments were received correctly.

HTTP

HTTP/1.0

  • It uses a separate TCP connection for each request/response cycle, meaning that each request has to wait for a response before the next request can be sent.
  • HTTP/1.0 lacks persistent connections, resulting in high latency and slower page loading times, especially for websites with multiple resources (e.g., images, scripts, stylesheets).

HTTP/1.1

  • Adds support for persistent connections and pipelining, allowing multiple requests to be sent over a single TCP connection without waiting for each response.
  • However, HTTP/1.1 has no way to match responses to requests, so responses must come back in the same order the requests were sent.
  • This means all responses must wait for the first request to complete, even if they are unrelated. This problem is known as head-of-line blocking.

HTTP/2

  • It introduces multiplexing, enabling multiple requests and responses to be sent concurrently over a single connection. Multiple streams of data can be transmitted concurrently within a single connection. Responses can be interleaved as they become available, and the client can associate each response with its corresponding request using stream identifiers. This resolves the issues with pipelining in HTTP/1.1.
  • HTTP/2 supports server push, where the server can proactively send additional resources to the client without waiting for explicit requests.
  • It includes header compression, reducing the overhead of repetitive header data, and enables more efficient bandwidth utilisation.

Python

Global interpreter lock

  • A mutex that allows only one thread at a time to hold control of the Python interpreter.
  • Only one thread can execute Python bytecode at a time, even in a multi-threaded program running on multiple CPU cores.
  • It was introduced to protect CPython’s reference-counting memory management from race conditions.
  • Every object created in Python has a reference count variable that tracks the number of references pointing to it. When this count reaches zero, the memory occupied by the object is released.
  • To work around the GIL for CPU-bound work, run multiple processes instead of threads. Each Python process has its own interpreter and memory space.
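The reference counting that the GIL protects is directly observable. Note that `sys.getrefcount` reports one more than you might expect, because its own argument is a temporary reference:

```python
import sys

obj = []
before = sys.getrefcount(obj)   # includes getrefcount's own argument

alias = obj                     # a second reference to the same list
after = sys.getrefcount(obj)    # count went up by exactly one
```

Without the GIL (or finer-grained locking), two threads incrementing and decrementing this counter concurrently could corrupt it, freeing an object still in use or leaking one forever.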

What does the HTTP response header contain when the status code is in the 300 range?

The HTTP response header typically contains important information about the response, including the status code and additional headers. In the case of a 300-level status code, there are several possible redirection responses, each with its own specific headers:

  1. 300 Multiple Choices: The server is offering multiple options for the requested resource. The response header may include the Location header, which specifies the URLs of the available alternatives. The client can then choose which resource to retrieve.
  2. 301 Moved Permanently: The requested resource has been permanently moved to a new URL. The response header usually includes the Location header, indicating the new URL where the resource can be found. The client should update its bookmarks or links to use the new URL for future requests.
  3. 302 Found: The requested resource is temporarily available at a different URL. The response header typically contains the Location header, indicating the temporary URL. The client should use the new URL for the current request but continue to use the original URL for future requests. In practice, many clients change the request method to GET when following a 302.
  4. 303 See Other: The server is redirecting the client to a different URL to retrieve the requested resource. The response header includes the Location header specifying the new URL, and the client should fetch it with a GET request, regardless of the method used in the original request.
  5. 307 Temporary Redirect: Like a 302, the requested resource is temporarily available at a different URL, given in the Location header. The difference is that the client must not change the request method when following the redirect (a POST stays a POST).
  6. 308 Permanent Redirect: Like a 301, the requested resource has been permanently moved to the URL given in the Location header, and the client should update its bookmarks or links. As with 307, the request method must not change when following the redirect.
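As a small illustration using only Python's standard library, we can spin up a local server that answers with a 301 and inspect the Location header directly (http.client does not follow redirects automatically; the target URL here is a made-up placeholder):

```python
import http.client
import http.server
import threading

# Minimal local server that answers every GET with a 301 redirect.
class RedirectHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(301)
        self.send_header("Location", "http://example.com/new-path")
        self.end_headers()

    def log_message(self, *args):  # keep the demo's output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), RedirectHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# http.client does not auto-follow redirects, so we see the raw 301.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/old-path")
resp = conn.getresponse()
status = resp.status                    # the 3xx status code
location = resp.getheader("Location")   # the new URL to retry against
conn.close()
server.shutdown()
print(status, location)
```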

How do languages handle function binding for multiple-class inheritance that involves the same function signature?

All methods in Java are virtual by default. That means any method can be overridden by a subclass, unless it is declared final, static, or private.

In C++, we need to declare functions with the virtual keyword explicitly.

class Shape {
public:
    virtual double getArea() { return 0; }
    virtual ~Shape() {}
};

class Rectangle : public Shape {
public:
    double getArea() override { return width * height; }
    double width = 0, height = 0;
};

class Square : public Shape {
public:
    double getArea() override { return side * side; }
    double side = 0;
};

Shape defines a virtual method getArea(), which its derived classes Rectangle and Square override. When you create an instance of Square and assign it to a pointer of type Shape*, as in Shape* s = new Square();, you are using polymorphism to treat the Square object as a Shape.

When you call s->getArea();, the method binding is determined by the actual type of the object at runtime, which is Square.

The dynamic dispatch mechanism ensures that the correct getArea() method implementation is called. In this case, the getArea() method of the Square class will be invoked because the actual object is of type Square.

Under the hood

When you create a new Square object with new Square(), memory is allocated on the heap for the Square object's data members plus a hidden vpointer. The method code itself is not duplicated per object; it exists once, in the program's code segment.

When you declare the variable Shape* s and assign new Square() to it, the pointer itself typically lives on the stack and holds the address of the heap-allocated Square object.

When you call s->getArea(), the compiler only knows that s points to a Shape, based on the variable declaration. The method binding, however, happens at runtime: the runtime follows the pointer stored in s and examines the actual type of the object, which is Square in this case.

The dynamic dispatch mechanism determines the appropriate getArea() method to invoke.

Shape will have a vtable, and Square will have a vtable: one vtable is created per class with virtual functions, not per object. These tables are lookup tables used to bind calls to virtual functions; each vtable contains the addresses of the virtual method implementations specific to that class.

The vtable of Shape has a getArea() entry pointing to the Shape implementation; the vtable of Square has a getArea() entry pointing to the Square implementation. (Any inherited virtual function that Square does not override keeps pointing to the Shape implementation; non-virtual functions do not appear in the vtable at all.)

Once the correct getArea() method is located through the vtable, the method's implementation is executed.

C++ uses vtables (virtual function tables) to store the addresses of virtual functions for each class. Each object of a class with virtual functions contains a hidden vpointer (virtual pointer) that points to its corresponding vtable.

Vtables are static, per-class tables: all instances of a class point to that class's single vtable.

https://www.youtube.com/watch?v=47ZP-0iBicI&ab_channel=KeertiPurswani

Difference of Polymorphism in Java and C++

Java does not expose or provide direct access to vtables. Instead, Java’s runtime environment uses method tables (also known as virtual method tables) internally to store references to method implementations. The method table is associated with the object’s class, not the object itself.

Vtable (used in C++):

  • Associated with each class that has virtual functions.
  • Contains function pointers or addresses that point to the implementations of virtual functions.
  • Each object has a hidden vpointer that points to its specific vtable.
  • The vtable is specific to each class and allows for dynamic dispatch of virtual functions.
Shape* s1 = new Square();
Shape* s2 = new Square();

The vpointer within each Square object will point to the same vtable associated with the Square class.

  1. There is one vtable per class (in this case, the Square class).
  2. Each object of the Square class (including s1 and s2) will have its own separate vpointer that points to the same vtable.

Method Table (used in Java):

  • Associated with each class.
  • Contains references to method implementations for all methods defined in that class.
  • Objects in Java do not have a direct reference to the method table; they have a hidden reference to their class, known as the “runtime class” or “class object.” The hidden reference to the class allows the Java runtime to locate the method table associated with that class.
  • The method table is shared among all objects of the same class and enables dynamic method dispatch based on the object’s class.

In summary, in C++ each object has its own vpointer to a shared, class-level vtable, while in Java objects carry a hidden reference to their class. Objects in Java do not have direct access to the method table.

Why do we still need RAM if we can just use Cache, which is faster?

Cache is a small, high-speed memory that is located closer to the CPU (Central Processing Unit) than RAM. Its purpose is to store frequently accessed instructions and data to speed up the CPU’s access time. The cache works on the principle of locality, exploiting the fact that programs often access the same data or instructions multiple times within a short period.

RAM (Random Access Memory), on the other hand, is a larger and slower type of memory compared to cache. It serves as the main working memory for the computer, storing the data and instructions that are actively being used by the CPU and other components. RAM allows for random access, meaning that any piece of data can be accessed quickly regardless of its location. Unlike cache, RAM is not as closely integrated with the CPU, so its access times are slower.

(1) The key reason we still need RAM in computer systems is its larger capacity. Cache memory is typically much smaller in size, often measured in megabytes, while RAM is measured in gigabytes or even terabytes. (2) Cache memory is also more expensive to manufacture, so it is not feasible to provide cache memory with capacities equivalent to RAM.

Furthermore, cache memory is designed to be fast but expensive, while RAM is slower but more cost-effective. The hierarchy of memory levels, including cache, RAM, and storage devices like hard drives or SSDs, allows for a balance between speed and cost. Data is moved between these memory levels based on its frequency of access and the need for storage capacity.

What are some ways for two processes in the same machine to communicate?

Message queues

Can use either linked list or circular array based structure.

In the linked-list approach, we keep head and tail pointers to the first and last nodes in the list. Each node is a message: producing appends a new node at the tail, while consuming removes the node at the head.

In the array-based approach, we use a fixed-size array with two maintained indices, front and back. When a new message is added, the back index is incremented; when a message is consumed, the front index is incremented. Both indices wrap around when they reach the end of the array, so the queue behaves like a circular array.

  1. Initial state:
  • Queue: [Message1]
  • Head pointer: points to Message1
  • Tail pointer: points to Message1

  2. Consumer removes the message:
  • The consumer dequeues the message from the queue.
  • After the removal, the queue becomes empty.
  • The head and tail pointers need to be updated accordingly.

  3. Updated state:
  • Queue: []
  • Head pointer: null (or undefined)
  • Tail pointer: null (or undefined)

The absence of both head and tail pointers indicates that the queue is in an empty state.

When a producer then wants to add a message, the head and tail pointer will be updated to point to the newly added message.
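The array-based approach above can be sketched as a minimal, single-process circular queue (illustrative only; a real IPC message queue would live in the kernel or in shared memory):

```python
class CircularQueue:
    """Fixed-size message queue backed by a circular array (a sketch)."""

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.front = 0   # index of the next message to consume
        self.size = 0    # number of messages currently stored

    def enqueue(self, msg):
        if self.size == len(self.buf):
            raise OverflowError("queue full")
        back = (self.front + self.size) % len(self.buf)  # wrap around
        self.buf[back] = msg
        self.size += 1

    def dequeue(self):
        if self.size == 0:
            raise IndexError("queue empty")
        msg = self.buf[self.front]
        self.front = (self.front + 1) % len(self.buf)    # wrap around
        self.size -= 1
        return msg

q = CircularQueue(2)
q.enqueue("Message1")
q.enqueue("Message2")
print(q.dequeue())      # Message1
q.enqueue("Message3")   # reuses the slot freed above (wrap-around)
print(q.dequeue())      # Message2
print(q.dequeue())      # Message3
```

Tracking the size explicitly, rather than only front and back indices, avoids the classic ambiguity where front == back could mean either an empty or a full queue.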

Pipes

Pipes can also be used for a simple producer-consumer communication pattern. In this case, the producer process writes data into the pipe, and the consumer process reads it.

Pipes are stream-oriented and provide a unidirectional flow of data. This differs from a message queue, where multiple producer and consumer processes can exchange discrete messages. Pipes are generally best suited for communication between related processes, such as a parent and its child processes.

Pipes provide a fixed-size buffer and give sequential access to the data, like a stream of bytes.

Pipes are used for synchronous communication: if the pipe's internal buffer is full, meaning it has reached its capacity, a write operation blocks until the reader consumes data and space becomes available. Message queues, by contrast, support asynchronous communication.
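A minimal sketch of the stream-of-bytes behaviour with os.pipe (for brevity, the writer and reader here are the same process; in a real producer-consumer setup the two ends would belong to a parent and a forked child):

```python
import os

# os.pipe() returns two file descriptors: r for reading, w for writing.
r, w = os.pipe()

# A small write like this fits in the pipe's fixed-size kernel buffer
# and returns immediately; a write larger than the buffer would block
# until a reader drained it.
os.write(w, b"hello")
os.close(w)  # closing the write end signals end-of-stream to the reader

# Reads return bytes in the order they were written (stream semantics).
data = os.read(r, 1024)
os.close(r)
print(data)
```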

Shared memory

Create a shared memory segment and attach to the address space of each process that needs access to it.

Each process maps the shared memory segment to its own address space.

Once the mapping is done, processes can read and write to the space as if it belongs to their own address space.

However, because multiple processes access the same memory concurrently, this requires synchronisation mechanisms such as semaphores or mutexes.
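These steps can be sketched with Python's multiprocessing.shared_memory module (Python 3.8+). For brevity, both "processes" below are the same process, with the second handle attaching to the segment by its name, exactly as a separate process would:

```python
from multiprocessing import shared_memory

# Create a shared memory segment; the OS maps it into our address space.
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"  # "producer" writes directly into the segment

# A second process would attach to the same segment by name.
other = shared_memory.SharedMemory(name=shm.name)
data = bytes(other.buf[:5])  # "consumer" reads the same bytes
print(data)

other.close()
shm.close()
shm.unlink()  # free the segment once no process needs it
```

Note that nothing here coordinates concurrent access; with real reader and writer processes, a semaphore or lock would be needed around the reads and writes.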

The End :)
