JavaScript TypeScript React FE Interview Qns

LiveRunGrow
34 min read · Apr 20, 2023


View of Waikiki beach from Diamond Head. Photo taken in 2022 by me in Hawaii :) I think it would be a dream come true if I could own a house overlooking the big blue ocean. It’s really beautiful.

Interview Qns

What’s the diff between functional and class components? Are they interchangeable?

They achieve the same results.

Functional components are plain JavaScript functions that return JSX. Historically, they were used to define stateless components.

  • With the introduction of Hooks in React, functional components can now also handle state and lifecycle methods, so the distinction between functional and class components is becoming less important.
function Greeting(props) {
  return <h1>Hello, {props.name}!</h1>;
}

Class components are classes that extend the `React.Component` class. They have a render() method that returns a React element. They also have state and lifecycle methods, which functional components lacked before Hooks. They are more complex but offer more flexibility and control over the component’s behavior.

class Greeting extends React.Component {
  render() {
    return <h1>Hello, {this.props.name}!</h1>;
  }
}

In terms of interchangeability, the two are largely interchangeable today. Class components have access to lifecycle methods like componentDidMount, while functional components use the useEffect Hook to achieve similar functionality.

Summary

  1. Syntax: Functional components are defined using a function, while class components are defined using a class that extends React.Component.
  2. State: Class components have access to the state object, which allows them to manage component state and trigger re-renders when necessary. Functional components did not have access to state prior to React 16.8, when the useState hook was introduced.
  3. Lifecycle methods: Class components have access to a number of lifecycle methods (such as componentDidMount and componentWillUnmount) that allow you to perform certain actions when the component mounts, updates, or unmounts. Functional components do not have access to these methods (prior to React 16.8), but they can use the useEffect hook to achieve similar functionality.
  4. Context and refs: Class components have access to the this keyword, which allows them to access class instance properties such as this.context (for accessing context values) and this.refs (for accessing DOM elements). Functional components do not have access to this, but they can use the useContext hook to access context values and the useRef hook to create refs.
  5. Performance: Prior to React 16.8, functional components were generally faster than class components because they were simpler and had less overhead. However, with the introduction of hooks, functional components can now have state and lifecycle methods, which can make them more similar to class components in terms of performance.

Void, unknown, never, any -> What’s the diff?

void is a type that represents the absence of a value. It's often used as the return type of functions that don't return a value or have side effects.
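A minimal sketch of void in practice (the function names here are my own):

```typescript
// A function performed for its side effect; it returns nothing meaningful.
function logMessage(message: string): void {
  console.log(message);
}

// `void` is also common in callback signatures whose return value is ignored.
const handler: () => void = () => logMessage("clicked");
handler();
```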

unknown is a type that represents a value whose type is unknown at compile time. It's often used when you don't know the type of a value, such as when you're working with data from an external source or when you're writing a generic function.

function parseJSON(jsonString: string): unknown {
  return JSON.parse(jsonString);
}

never is a type that represents a value that will never occur. It's often used as the return type of functions that throw an error or enter an infinite loop.

function throwError(message: string): never {
  throw new Error(message);
}

any is a type that represents any type of value. It's often used when you're working with dynamic or untyped data. Unlike unknown, it effectively switches off type checking for that value, so it should be used sparingly.
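A short sketch of the practical difference between any and unknown (the values here are my own):

```typescript
const anyValue: any = JSON.parse('"hello"');
anyValue.toUpperCase(); // compiles: `any` switches type checking off

const unknownValue: unknown = JSON.parse('"hello"');
// unknownValue.toUpperCase(); // would not compile: the type must be narrowed first
let upper = "";
if (typeof unknownValue === "string") {
  upper = unknownValue.toUpperCase(); // safely narrowed to string
}
```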

?? or ||

The ?? operator is called the nullish coalescing operator and is used to provide a default value when a variable is null or undefined.

const foo = null;
const bar = foo ?? 'default';
console.log(bar); // 'default'

The || operator is called the logical OR operator and is used to provide a default value when a variable is falsy (i.e., null, undefined, 0, false, '', or NaN).

const foo = null;
const bar = foo || 'default';
console.log(bar); // 'default'

So, the main difference between ?? and || is that ?? checks for null or undefined values, while || checks for any falsy values. Use ?? when you want to provide a default value for null or undefined variables, and use || when you want to provide a default value for any falsy variables.

The value foo = false produces different results for ?? and ||: foo ?? 'default' evaluates to false, while foo || 'default' evaluates to 'default'.
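A small example of that difference (the values here are my own):

```typescript
const flag = false;
const count = 0;

const a = flag ?? "default"; // false: ?? only falls back on null/undefined
const b = flag || "default"; // "default": || falls back on any falsy value
const c = count ?? 100;      // 0
const d = count || 100;      // 100
```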

Undefined means the variable has been declared, but its value has not yet been assigned. Null is an assigned value that represents an intentionally empty value.

Async/await?

The async/await is a way of writing asynchronous code in a more synchronous style.

// Define an asynchronous function that returns a promise
async function getData() {
  // Use the 'await' keyword to wait for a promise to resolve
  const response = await fetch('https://api.example.com/data');
  // Use the 'await' keyword again to wait for the response to be parsed as JSON
  const data = await response.json();
  // Return the data
  return data;
}

// Call the function and handle the result with a 'then' callback
getData().then(data => {
  console.log(data);
}).catch(error => {
  console.error(error);
});

In this example, the getData() function is defined as an async function, which means it returns a promise that will resolve with the return value of the function. The function uses the await keyword to wait for the fetch() request to resolve and for the response to be parsed as JSON. Once the data is obtained, it is returned from the function.

When the fetch() function is called, it returns a promise that resolves to a Response object. The await keyword is used to pause the execution of the getData() function until the promise is resolved and the Response object is available.

Once the Response object is available, the json() method is called on it to parse the response body as JSON. The json() method also returns a promise that resolves to the parsed JSON data. Again, the await keyword is used to pause the execution of the getData() function until the promise is resolved and the parsed JSON data is available.

Once the parsed JSON data is available, it is assigned to the data variable and returned from the function. At this point, the getData() function is complete and the value of the promise it returns is set to the value of the data variable.

The then() method is used to attach a callback function to the promise returned by getData(). The callback function will be called with the value of the promise, which in this case is the parsed JSON data. If there are any errors during the execution of the getData() function, the catch() method will be called with the error object.

If you have the code

getData().then(data => {
  console.log(data);
}).catch(error => {
  console.error(error);
});

console.log("____");

the console.log("____") statement will be executed before the console.log(data) statement because getData() is an asynchronous function that returns a promise. When getData() is called, it starts running in the background and immediately returns a promise.

The then() and catch() methods are used to attach callback functions to the promise that will be called when the promise is resolved or rejected. However, these callback functions do not block the execution of the main thread, so any code that comes after the call to getData() will continue to execute while getData() is running in the background.
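A self-contained sketch of that ordering, recording into an array instead of logging (the names here are my own):

```typescript
const order: string[] = [];

async function getValue(): Promise<number> {
  order.push("sync part of getValue");     // runs immediately
  const value = await Promise.resolve(42); // execution pauses here
  order.push("after await");               // runs later, as a microtask
  return value;
}

getValue().then((value) => order.push(`then: ${value}`));
order.push("after the call");

// At this point `order` is ["sync part of getValue", "after the call"]:
// everything after the first await runs only once the current code finishes.
```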

async function getData() {
  const a = await xx();
  const b = await yy();
  const c = await zz();

  return {a, b, c};
}

// How to improve above?
async function getData() {
  const [a, b, c] = await Promise.all([xx(), yy(), zz()]);
  return {a, b, c};
}

This code uses Promise.all() to run xx(), yy(), and zz() in parallel, which is faster than awaiting them one after another. When all three promises have resolved, their return values are destructured and assigned to the variables a, b, and c, respectively.

Promises vs Async-Await

Promises and async/await are both used in JavaScript for handling asynchronous operations, but they are different concepts.

Promises are a way to handle asynchronous operations in JavaScript. A promise is an object that represents a value that may not be available yet. When a promise is created, it is in a “pending” state. Asynchronous operations that are associated with the promise are started, and the promise remains in the “pending” state until the operation completes.

Once the operation completes, the promise is either “fulfilled” with a value or “rejected” with an error. Fulfillment means that the operation completed successfully, and the promise’s value is now available. Rejection means that an error occurred during the operation, and the promise’s value is not available.

async/await is a newer way to handle asynchronous operations in JavaScript. It is built on top of Promises and provides a more readable syntax for writing asynchronous code. The async keyword is used to define a function that returns a Promise, and the await keyword is used to pause the execution of the function until a Promise is fulfilled or rejected.

Using async/await can make asynchronous code look more like synchronous code and can be easier to read and understand than using Promises directly. However, Promises are still a fundamental part of JavaScript's asynchronous programming model, and many libraries and APIs still use Promises for handling asynchronous operations.

const myPromise = new Promise((resolve, reject) => {
  // Perform some asynchronous operation
  // ...

  // If the operation is successful, call the resolve function
  resolve('Operation completed successfully');

  // If the operation fails, call the reject function
  // reject('Operation failed');
});

In this example, myPromise is a new Promise object that is created with the Promise constructor. The function passed to the constructor takes two arguments, resolve and reject, which are functions that can be called to either fulfill or reject the promise.

Inside the function, you can perform some asynchronous operation, such as fetching data from a server or reading a file from disk. Once the operation is complete, you can call the resolve function with the result of the operation to fulfill the promise. If the operation fails, you can call the reject function with an error message or object to reject the promise.

Once the Promise object is created, you can use its then() method to handle the fulfilled value, or its catch() method to handle any errors that occurred during the operation.
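The two consumption styles side by side, using a promise like the one above (the consume wrapper is my own name):

```typescript
const myPromise = new Promise<string>((resolve) => {
  resolve("Operation completed successfully");
});

// Style 1: then()/catch()
myPromise
  .then((result) => console.log(result))
  .catch((error) => console.error(error));

// Style 2: async/await with try/catch, equivalent but reads top-to-bottom
async function consume(): Promise<string> {
  try {
    const result = await myPromise;
    return result;
  } catch (error) {
    console.error(error);
    throw error;
  }
}
```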

Difference between for of loop and for in loop?

The for...in loop is a JavaScript loop that allows you to iterate over the properties of an object. It can be used to loop over the keys of an object and perform some operation on each key or property.

const person = {
  name: 'John',
  age: 30,
  email: 'john@example.com'
};

for (const key in person) {
  console.log(key + ': ' + person[key]);
}

The for...in loop can be useful for working with objects, but it has some limitations. One of the main issues is that it iterates over all enumerable properties, including those inherited from the object's prototype chain. This can lead to unexpected behavior if you're not careful.

To avoid iterating over inherited properties, you can use the hasOwnProperty() method to check whether each key belongs to the object itself, or is inherited from its prototype chain:

for (const key in person) {
  if (person.hasOwnProperty(key)) {
    console.log(key + ': ' + person[key]);
  }
}

In this example, the hasOwnProperty() method is used to check whether each key is owned by the person object itself, before logging it to the console.

It’s important to note that the for...in loop is not recommended for iterating over arrays, as it can produce unexpected results. If you need to iterate over the values of an array, it's generally better to use a for...of loop or a traditional for loop.
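A quick demonstration of why for...in surprises people on arrays (collected into an array for clarity):

```typescript
const letters = ["a", "b", "c"];
const seen: string[] = [];

for (const key in letters) {
  seen.push(key); // the indices as strings ("0", "1", "2"), not the values
}
```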

const numbers = [1, 2, 3, 4, 5];

for (const num of numbers) {
  console.log(num);
}

The for...of loop is often preferred over the traditional for loop when iterating over iterable objects, as it provides a more concise and readable syntax and avoids manual index bookkeeping. However, it is worth noting that the for...of loop cannot be used to iterate over plain objects, as they are not iterable.

Examples of iterable objects in JavaScript include:

  • Arrays
  • Strings
  • Maps
  • Sets
  • Generators
  • TypedArrays
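Since plain objects are not iterable, a common pattern is to convert them with Object.entries(), which returns an (iterable) array:

```typescript
const person = { name: "John", age: 30 };
const lines: string[] = [];

for (const [key, value] of Object.entries(person)) {
  lines.push(`${key}: ${value}`);
}
// lines is ["name: John", "age: 30"]
```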

When will rerender happen?

In React, a component re-renders when its state or props change. This means that any time you call the setState() method on a component, or the parent component passes new props to it, React will compare the previous and new state or props and decide whether to re-render the component or not.

When a component re-renders, React will first call the render() method to compute a new virtual DOM representation of the component, and then compare it with the previous virtual DOM. If there are any differences between the two, React will update the actual DOM to reflect the changes.

It’s worth noting that not every state or props change has to trigger a re-render. For optimized components, React uses a mechanism called “shallow comparison”: each top-level value of the new props or state is compared by reference with the previous one (this is what React.memo and PureComponent do). If the new values are shallowly equal to the previous ones, the re-render is skipped; if any top-level value differs, a re-render is triggered.
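As a rough sketch of what a shallow comparison does (illustrative only, not React's actual implementation):

```typescript
function shallowEqual(
  objA: Record<string, unknown>,
  objB: Record<string, unknown>
): boolean {
  if (Object.is(objA, objB)) return true;
  const keysA = Object.keys(objA);
  const keysB = Object.keys(objB);
  if (keysA.length !== keysB.length) return false;
  // Each top-level value is compared by reference, not recursively.
  return keysA.every((key) => Object.is(objA[key], objB[key]));
}

const prevProps = { count: 1, items: [1, 2] };
const sameRefs = shallowEqual(prevProps, { count: 1, items: prevProps.items }); // true
const newArray = shallowEqual(prevProps, { count: 1, items: [1, 2] });          // false: fresh array reference
```

This is why passing a freshly created object or array as a prop on every render defeats memoization, even when its contents look identical.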

The useEffect() hook can also cause a re-render of the component it is called in. useEffect() runs after the component has rendered, and any state updates that occur inside the effect function will trigger another render of the component.

However, React uses a mechanism called “batching” to group multiple state updates together and minimize the number of re-renders that occur, so several state updates inside one effect typically result in a single re-render.

If you need to conditionally execute an effect based on some value, you can use the dependency array argument of useEffect(). When you pass a dependency array to useEffect(), React will only execute the effect if any of the values in the array have changed since the last render. If the dependency array is empty, the effect will only be executed once, after the first render.

Overall, the useEffect() hook is a powerful tool for managing side effects in React components, and can help you write cleaner, more efficient code. However, it's important to use it judiciously and understand its effects on the rendering of your components.

Use of context in React?

The React Context API is useful for providing global data or behavior to components in your application without having to pass that data or behavior through multiple levels of component hierarchy via props.

Here are some common scenarios where using the Context API may be beneficial:

  1. Theming: If your application has a theme, you can use the Context API to make the theme available to all child components without passing it through props.
  2. User authentication: If your application requires user authentication, you can use the Context API to make the user’s authentication status available to all child components without passing it through props.
  3. Localization: If your application needs to support multiple languages, you can use the Context API to make the user’s preferred language available to all child components without passing it through props.
  4. Data caching: If your application needs to cache data that’s expensive to fetch, you can use the Context API to store the cached data and make it available to child components.

In general, you should use the Context API when you have data or behavior that needs to be shared across many components, especially if passing that data or behavior through props would become unwieldy or difficult to maintain. However, you should also be aware that using context can make it harder to reason about how data flows through your application, so you should use it judiciously and consider alternative solutions if the complexity becomes too great.

What is DOM?

Document Object Model.

It represents the webpage in a hierarchical structure.

DOM describes the logical structure of documents and how one can access and manipulate them.

These documents are usually treated as a tree structure in which every node is an object that represents a specific part of the document.

What’s virtual dom vs dom?

The DOM (Document Object Model) is a programming interface for web documents. It represents the page so that programs can change the document structure, style, and content. The DOM represents the document as nodes and objects. That way, programming languages can interact with the page. When a web page is loaded, the browser creates a DOM for the page.

The Virtual DOM, on the other hand, is a concept used by many modern JavaScript frameworks and libraries, including React. The Virtual DOM is a lightweight copy of the actual DOM. When a change is made to the UI, instead of directly updating the DOM, the change is first made to the Virtual DOM. The Virtual DOM then compares the new state to the previous state to determine the minimum number of changes needed to update the actual DOM. Finally, the Virtual DOM updates the actual DOM with the minimum number of changes required, resulting in a faster and more efficient UI update.

Why faster?

Updating the Virtual DOM is faster than updating the actual DOM because the Virtual DOM allows for batch updates and selective updates.

When an update is made to the UI in a web application, it can result in multiple changes to the DOM. For example, updating a component might result in changes to multiple elements within that component, as well as changes to other components on the page. Updating the DOM for each individual change can be slow and inefficient, especially for complex web applications.

The Virtual DOM allows for batch updates by making changes to a lightweight copy of the DOM, rather than the actual DOM. This means that multiple changes can be made to the Virtual DOM at once, and then applied to the actual DOM as a single batch update. Batch updates are faster and more efficient than individual updates, as they minimize the amount of work required to update the DOM.

In addition to batch updates, the Virtual DOM also allows for selective updates. When an update is made to the Virtual DOM, it is compared to the previous state of the Virtual DOM to determine which parts of the actual DOM need to be updated. Only the parts of the actual DOM that have changed are updated, rather than the entire DOM tree. This selective updating reduces the amount of work required to update the DOM, resulting in faster and more efficient updates.

Overall, updating the Virtual DOM is faster than updating the actual DOM because it allows for batch updates and selective updates, which minimize the amount of work required to update the DOM.
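A toy illustration of the diffing idea (my own simplified code, not React's actual algorithm): compare two virtual nodes and collect only the attributes that changed.

```typescript
interface VNode {
  tag: string;
  props: Record<string, string>;
}

function diffProps(prev: VNode, next: VNode): Record<string, string> {
  const patches: Record<string, string> = {};
  for (const key of Object.keys(next.props)) {
    if (prev.props[key] !== next.props[key]) {
      patches[key] = next.props[key]; // only changed values would be written to the real DOM
    }
  }
  return patches;
}

const before: VNode = { tag: "button", props: { class: "btn", label: "Save" } };
const after: VNode = { tag: "button", props: { class: "btn", label: "Saving..." } };
const patches = diffProps(before, after); // only { label: "Saving..." }
```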

Interface VS Types

interface PointInterface {
  x: number;
  y: number;
}

type PointType = {
  x: number;
  y: number;
};

/////////////////
type Animal = {
  name: string;
};
type Bear = Animal & {
  likesHoney: boolean;
}; // Adding an additional property likesHoney

// You can do the same thing with interfaces
interface Pet {
  name: string;
}
interface Dog extends Pet {
  doesTricks: boolean;
}

////////////////// Example for merging //////////////////
// Note that you can extend interfaces
interface Whale {
  name: string;
  canSwim: boolean;
}
// Note, no error.
interface Whale {
  isMammal: boolean;
}
const blueWhale: Whale = {
  name: "blueWhale",
  canSwim: true,
  isMammal: true
};
console.log(blueWhale);

// But not with types.
type Fish = { canSwim: boolean };
type Fish = { isMammal: boolean }; // Error: Duplicate identifier 'Fish'
// But that makes sense, since `type` uses `=` to create the type.
  1. You can only use unions with Type, not Interface. E.g., type xOrYPoint = xPoint | yPoint;
  2. Both support AND-style composition: & for Types, extends for Interfaces.
  3. Interfaces let you merge declarations; duplicate Type declarations give an error.
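A small sketch of the union vs. intersection points above (the point types here are my own):

```typescript
interface XPoint {
  x: number;
}
interface YPoint {
  y: number;
}

// Unions require a type alias; `interface` has no union form.
type XOrYPoint = XPoint | YPoint;

// Intersections work with `&` on types (interfaces use `extends` instead).
type FullPoint = XPoint & YPoint;

const q: XOrYPoint = { x: 1 };       // only one side is required
const p: FullPoint = { x: 1, y: 2 }; // both sides are required
```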

For more notes

Refer to:

The above are notes I copied from the weekly frontend classes conducted by a colleague at my previous company.

What is a Single Page Application?

Single page application → You go to the server and get index.html from it. From then on, when a user navigates to another page, it feels like they are going to a new page. But actually the app is just swapping DOM in and out and hooking into the browser history API. The user experience feels like a multi-page application, but technically it’s a single document whose DOM is updated in place.

What is Babel?

Babel is a very famous transpiler that basically allows us to use future JavaScript in today’s browsers. In simple words, it can convert the latest version of JavaScript code into the one that the browser understands.

A Transpiler is a tool that converts source code into other source code at the same level of abstraction, which is why it is also known as a source-to-source compiler. The two versions are functionally equivalent; one simply works in a given browser version while the other doesn’t.

It is also good to note that a compiler is different from a transpiler: a transpiler converts source code into other source code at the same abstraction level, whereas a compiler generally converts code into lower-level code. In Java, for example, the source code is compiled to bytecode, which is lower-level and not equivalent source code.

The main reason we need babel is that it gives us the privilege to make use of the latest things JavaScript has to offer without worrying about whether it will work in the browser or not.
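As a rough before/after illustration of the kind of transformation a transpiler performs (this is illustrative, not Babel's exact output):

```typescript
// Modern syntax (ES2015+ arrow function):
const double = (n: number): number => n * 2;

// Roughly what an ES5 target would use instead (a plain function expression):
var doubleES5 = function (n: number): number {
  return n * 2;
};

// Both are equivalent in behavior; only the syntax level differs.
```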

What is JSX?

JSX is a syntax extension that the React team came up with to complement React. It is syntactic sugar that lets React developers create components easily.

We use transpilers like Babel to convert JSX to JavaScript.

Here we have a JSX snippet:

<div>
  <h1>Hello JSX</h1>
  <h2 label="screen">Sub heading</h2>
</div>

When Babel converts the above code to JavaScript, it uses React.createElement(). This method accepts 3 parameters:

  1. Name of component
  2. Attributes of the component
  3. Children of the component

Here is what the JavaScript output of the above code looks like:

React.createElement("div", {}, [
  React.createElement("h1", {}, "Hello JSX"),
  React.createElement(
    "h2",
    {
      label: "screen",
    },
    "Sub heading"
  ),
]);

What is React?

JSX is not React. JSX is a syntax extension used with React.

React is a JavaScript library for building user interfaces, while JSX is a way to write HTML-like code within JavaScript. React uses JSX to define the structure and behavior of components in a declarative manner, making it easier to manage complex UIs.

JSX is not a requirement for using React, but it is commonly used because it provides a more intuitive and readable way to define components. Without JSX, developers would have to write JavaScript code to define the structure and content of each component, which could be more verbose and difficult to read.

In summary, JSX is a syntax extension that is commonly used with React to define components in a declarative manner, but it is not React itself. React is a JavaScript library for building user interfaces.

React can be used with both TypeScript and JavaScript; TypeScript compiles down to JavaScript, so React itself doesn’t need to care which one you write.

JavaScript is the primary language used with React, as React is built on top of JavaScript and uses JavaScript to define the behavior of components. But TypeScript can provide additional type safety and make the development process more robust.

React is a library that works by using a virtual DOM (Document Object Model) to manage the state and render the UI.

Reduce in JS

The reduce() method in JavaScript is used to iterate over an array and accumulate a single value based on the elements of the array. It applies a callback function to each element of the array, updating an accumulator value at each iteration. The reduce() method takes two arguments: the callback function and an optional initial value for the accumulator.

General syntax

array.reduce(callback, initialValue);

const numbers = [1, 2, 3, 4, 5];

const sum = numbers.reduce((accumulator, currentValue) => {
  return accumulator + currentValue;
}, 0);

console.log(sum); // Output: 15

You can also use reduce() to perform operations other than summing. For example, you can use it to find the maximum or minimum value in an array, concatenate strings, or perform custom calculations.

const numbers = [10, 5, 20, 8, 15];

const max = numbers.reduce((accumulator, currentValue) => {
  return Math.max(accumulator, currentValue);
}, -Infinity);

console.log(max); // Output: 20
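reduce() can also build up an object, for example counting occurrences (the word list here is my own):

```typescript
const words = ["apple", "banana", "apple", "cherry", "banana", "apple"];

const counts = words.reduce<Record<string, number>>((accumulator, word) => {
  accumulator[word] = (accumulator[word] ?? 0) + 1;
  return accumulator;
}, {});

console.log(counts); // { apple: 3, banana: 2, cherry: 1 }
```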

What is scope in JS?

Scope is essentially the set of rules that determines where variables are accessible, together with the variables themselves that are visible at a given point in the code.

What is Content Security Policy?

Content Security Policy (CSP) is a security feature in web browsers that helps mitigate various types of attacks, such as cross-site scripting (XSS) and data injection. It allows website operators to define a set of rules that control the sources from which certain types of content can be loaded and executed on a web page.

This policy is communicated to the browser using an HTTP header or a <meta> tag.

These directives restrict the browser from loading content from unauthorized or potentially malicious sources.

For example, a CSP policy can specify that scripts can only be loaded from the same domain as the web page (self), or from specific trusted domains (https://example.com), while blocking scripts from all other sources.

Content-Security-Policy: default-src 'self' https://example.com;

In this example, the default-src directive specifies that the default source for all types of content should be the same origin ('self'), and scripts can also be loaded from https://example.com.

By setting a CSP policy, website operators can effectively mitigate the risk of XSS attacks by preventing the execution of malicious scripts from unauthorized sources. However, it’s important to configure the policy carefully to avoid blocking legitimate resources on the web page.

What is Cross-Site Scripting (XSS)?

Cross-Site Scripting (XSS) is a type of security vulnerability that occurs when an attacker injects malicious code, typically in the form of client-side scripts, into a web application. The injected code is then executed by the victim’s browser, potentially leading to unauthorized actions or data theft.
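One common defense is escaping untrusted input before inserting it into HTML. A minimal sketch (real applications should rely on framework auto-escaping or a vetted sanitization library):

```typescript
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;") // must run first so it does not re-escape the others
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

const escaped = escapeHtml('<script>alert("XSS")</script>');
// "&lt;script&gt;alert(&quot;XSS&quot;)&lt;/script&gt;" renders as inert text
```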

Explain CORS

Cross-origin resource sharing.

Cross-Origin Resource Sharing (CORS) is a protocol that enables scripts running on a browser client to interact with resources from a different origin.

It relaxes the SOP (Same-Origin Policy) in a controlled way. The same-origin policy restricts a website’s ability to access resources outside its source domain. For example, if a JavaScript app wanted to call an API (Application Programming Interface) running on another domain, it would be blocked from doing so by the SOP. CORS was introduced to lift this restriction in a controlled way.

When a web browser makes certain cross-origin requests (those that are not “simple”, e.g. using methods like PUT or custom headers), it first sends an additional HTTP request called a preflight request (using the OPTIONS method) to the server hosting the resource. This preflight request is used to determine whether the server allows the cross-origin request and which HTTP methods and headers are permitted.

If the server has been correctly configured to allow cross-origin requests, it will respond to the preflight request with specific headers, indicating that it supports CORS. These headers include:

  1. Access-Control-Allow-Origin: Specifies the origin (domain) that is allowed to access the resource. The server can set this header to “*” to allow access from any domain or specify specific origins.
  2. Access-Control-Allow-Methods: Specifies the HTTP methods (e.g., GET, POST, PUT) that are allowed for cross-origin requests.
  3. Access-Control-Allow-Headers: Specifies the HTTP headers that are allowed in the request.
  4. Access-Control-Allow-Credentials: Indicates whether the resource supports credentials (such as cookies or HTTP authentication) for cross-origin requests.

The presence of these headers in the server’s response allows the web browser to determine whether it is allowed to proceed with the cross-origin request or if it should be blocked due to security restrictions. The server hosting the resource needs to include these headers in its response to support CORS and enable cross-origin requests from web browsers.
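A hypothetical preflight exchange might look like this (the domains and path are made up for illustration):

```
OPTIONS /data HTTP/1.1
Host: api.example.com
Origin: https://app.example.com
Access-Control-Request-Method: POST
Access-Control-Request-Headers: Content-Type

HTTP/1.1 204 No Content
Access-Control-Allow-Origin: https://app.example.com
Access-Control-Allow-Methods: GET, POST, PUT
Access-Control-Allow-Headers: Content-Type
Access-Control-Allow-Credentials: true
```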

What is CDN?

A CDN is a group of geographically distributed proxy servers. A proxy server is an intermediate server between a client and the origin server. It helps to quickly deliver the content to the end users by reducing latency and saving bandwidth.

CDN mainly stores two types of data: static and dynamic.

Push CDN is appropriate for static content delivery, where the origin server decides which content to deliver to users using the CDN.

When users request web content in the pull CDN model, the CDN itself is responsible for pulling the requested content from the origin server and serving it to the users. Therefore, this type of CDN is more suited for serving dynamic content.

How many ways can an image be rendered?

  1. Img tag
  2. You can use CSS to set an image as the background of an element using the background-image property. This allows you to position and style the image within the element.
  3. CSS content property: When working with pseudo-elements (::before and ::after), you can use the CSS content property along with the url() function to insert an image as content. This technique is commonly used for decorative elements or icons.
  4. SVG image: Instead of using a raster image format like JPEG or PNG, you can use Scalable Vector Graphics (SVG) to render images. SVG is an XML-based format that allows for creating and displaying vector graphics that can scale without losing quality. SVG images can be embedded directly in HTML using the <svg> tag or referenced using the <object> or <embed> tags.
  5. …. many others…
.my-element::before {
  content: url('path/to/image.jpg');
}

How to decrease page load size?

The best ways to decrease page load size (and therefore load time) are:

  • Image optimization
  • Browser cache
  • Compress and optimize content

State the elements of the CSS Box Model.

The CSS Box Model consists of 4 elements:

  • Content
  • Padding
  • Border
  • Margin
https://twitter.com/b0rk/status/1284132999940968454

What are Closures in JavaScript?

Closures in JavaScript are a feature where an inner function has access to the outer function’s variables.

function outer_func() {
  var b = 10;
  function inner_func() {
    var a = 20;
    console.log(a + b); // 30: inner_func can read `b` from the outer function
  }
  return inner_func;
}

A closure has three scope chains –

  • It has access to variables defined within its own curly braces, i.e. its own scope.
  • It has access to the outer function’s variables.
  • It has access to global variables.
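A classic closure example: a counter whose state survives between calls but is invisible from outside (the names here are my own):

```typescript
function makeCounter() {
  let count = 0; // lives in the outer function's scope
  return function increment(): number {
    count += 1; // the inner function still sees `count` after makeCounter returns
    return count;
  };
}

const counter = makeCounter();
counter(); // 1
counter(); // 2
// `count` itself is unreachable here except through `counter`.
```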

More

JUSTIFY-CONTENT WILL ALIGN ITEMS ON THE MAIN AXIS.

ALIGN-ITEMS WILL ALIGN ITEMS ON THE CROSS AXIS.

Animate

.button {
  transition: transform 1000ms; /* applies when the button bounces back down */
}

.button:hover,
.button:focus-visible {
  transform: translateY(-1.75rem);
  transition: transform 250ms; /* applies when someone hovers over the button */
}

Animations consist of two components, a style describing the CSS animation and a set of keyframes that indicate the start and end states of the animation’s style, as well as possible intermediate waypoints.

Routing

npm i react-router-dom

More about JS functions (From Leetcode)

Function Syntax

In JavaScript, there are two main ways to declare a function. One of which is to use the function keyword.

Basic Syntax

The syntax is:

function f(a, b) {
const sum = a + b;
return sum;
}
console.log(f(3, 4)); // 7

In this example, f is the name of the function. (a, b) are the arguments. You can write any logic in the body and finally return a result. You are allowed to return nothing, and it will instead implicitly return undefined.

Anonymous Function

You can optionally exclude the name of the function after the function keyword.

var f = function(a, b) {
const sum = a + b;
return sum;
}
console.log(f(3, 4)); // 7

Immediately Invoked Function Expression (IIFE)

You can create a function and immediately execute it in Javascript.

const result = (function(a, b) {
const sum = a + b;
return sum;
})(3, 4);
console.log(result); // 7

Why would you write code like this? It gives you the opportunity to encapsulate a variable within a new scope. For example, another developer can immediately see that sum can't be used anywhere outside the function body.
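For instance, an IIFE can hide a mutable variable behind a small public API (a sketch, with hypothetical names):

```javascript
// `count` exists only inside the IIFE's scope; nothing outside can
// touch it except through the returned methods.
const counter = (function() {
  let count = 0;
  return {
    increment: function() { return ++count; },
    current: function() { return count; }
  };
})();

console.log(counter.increment()); // 1
console.log(counter.increment()); // 2
console.log(counter.current());   // 2
```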

Functions Within Functions

A powerful feature of JavaScript is you can actually create functions within other functions and even return them!

function createFunction() {
function f(a, b) {
const sum = a + b;
return sum;
}
return f;
}
const f = createFunction();
console.log(f(3, 4)); // 7

In this example, createFunction() returns a new function. Then that function can be used as normal.

Function Hoisting

JavaScript has a feature called hoisting where a function can sometimes be used before it is initialized. You can only do this if you declare functions with the function syntax.

function createFunction() {
return f;
function f(a, b) {
const sum = a + b;
return sum;
}
}
const f = createFunction();
console.log(f(3, 4)); // 7

In this example, the return statement appears before the function definition, yet the code works because function declarations are hoisted. Although it is valid syntax, it is sometimes considered bad practice as it can reduce readability.
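By contrast, a function assigned to a const is not usable before the assignment runs. A small sketch of the difference:

```javascript
// Function declarations are hoisted, so this early call works.
console.log(declared(2, 3)); // 5

function declared(a, b) {
  return a + b;
}

// A const arrow function is NOT usable before its definition;
// uncommenting the next line would throw a ReferenceError.
// console.log(notYet(2, 3));
const notYet = (a, b) => a + b;
console.log(notYet(2, 3)); // 5
```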

Closures

An important topic in JavaScript is the concept of closures. When a function is created, it keeps a reference to all the variables declared around it, also known as its lexical environment. The combination of the function and its environment is called a closure. This is a powerful and often used feature of the language.

function createAdder(a) {
function f(b) {
const sum = a + b;
return sum;
}
return f;
}
const f = createAdder(3);
console.log(f(4)); // 7

In this example, createAdder passes the first parameter a and the inner function has access to it. This way, createAdder serves as a factory of new functions, with each returned function having different behavior.
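Because each call creates its own lexical environment, the returned functions are fully independent of one another (redefining createAdder here so the snippet is self-contained):

```javascript
function createAdder(a) {
  return function(b) {
    return a + b;
  };
}

// Each call captures its own copy of `a` in a separate closure.
const add3 = createAdder(3);
const add10 = createAdder(10);
console.log(add3(4));  // 7
console.log(add10(4)); // 14
```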

Arrow Syntax

The other common way to declare functions is with arrow syntax. In fact, on many projects, it is the preferred syntax.

Basic Syntax

const f = (a, b) => {
const sum = a + b;
return sum;
};
console.log(f(3, 4)); // 7

In this example, f is the name of the function. (a, b) are the arguments. You can write any logic in the body and finally return a result. You are allowed to return nothing, and it will instead implicitly return undefined.

Omit Return

If you can write the code in a single line, you can omit the return keyword. This can result in very short code.

const f = (a, b) => a + b;
console.log(f(3, 4)); // 7

Differences

There are 3 major differences between arrow syntax and function syntax.

  1. More minimalistic syntax. This is especially true for anonymous functions and single-line functions. For this reason, this way is generally preferred when passing short anonymous functions to other functions.
  2. No automatic hoisting. You are only allowed to use the function after it was declared. This is generally considered a good thing for readability.
  3. Can’t be bound to this, super, and arguments or be used as a constructor. These are all complex topics in themselves but the basic takeaway should be that arrow functions are simpler in their feature set. You can read more about these differences here.

The choice of arrow syntax versus function syntax is primarily down to preference and your project’s stylistic standards.
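A small sketch of the `this` difference: a regular function takes `this` from how it is called, while an arrow function ignores any `this` passed via call or bind:

```javascript
function regular() {
  return this.x; // `this` is whatever the caller binds
}

const arrow = () => {
  // Arrow functions have no `this` of their own; `call` cannot rebind it.
  return typeof this === "undefined" ? undefined : this.x;
};

console.log(regular.call({ x: 1 })); // 1
console.log(arrow.call({ x: 1 }));   // undefined
```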

Rest Arguments

You can use rest syntax to access all the passed arguments as an array. This isn’t necessary for this problem, but it will be a critical concept for many problems. You can read more about ... syntax here.

Basic Syntax

The syntax is:

function f(...args) {
const sum = args[0] + args[1];
return sum;
}
console.log(f(3, 4)); // 7

In this example the variable args is [3, 4].

Why

It may not be immediately obvious why you would use this syntax because you can always just pass an array and get the same result.

The primary use-case is for creating generic factory functions that accept any function as input and return a new version of the function with some specific modification.

By the way, a function that accepts a function and/or returns a function is called a higher-order function, and they are very common in JavaScript.

For example, you can create a logged function factory:

function log(inputFunction) {
return function(...args) {
console.log("Input", args);
const result = inputFunction(...args);
console.log("Output", result);
return result;
}
}
const f = log((a, b) => a + b);
f(1, 2); // Logs: Input [1, 2] Output 3

Solutions to Problem 2667

Write a function createHelloWorld. It should return a new function that always returns "Hello World".

Now let’s apply these different ways of writing JavaScript functions to solve this problem.

Function Syntax

var createHelloWorld = function() {
return function() {
return "Hello World";
}
};

Arrow Syntax

var createHelloWorld = function() {
return () => "Hello World";
};

Arrow Syntax + Rest Arguments

var createHelloWorld = function() {
return (...args) => "Hello World";
};

More on Generator Functions

Overview

This problem presents an interesting exploration of JavaScript generator functions, with the objective of writing a generator function that yields the Fibonacci sequence. This sequence is a series of numbers in which each number is the sum of the two preceding ones, generally starting with 0 and 1. Therefore, the sequence initiates as follows: 0, 1, 1, 2, 3, 5, 8, 13 and so forth.

JavaScript generator functions are special types of functions that can control the execution flow within a function, including the ability to pause and resume at specific points. This characteristic makes them ideal for generating potentially infinite sequences like the Fibonacci sequence. By using the yield keyword, a generator function can produce a sequence of values over time, instead of computing them all at once. It can thus generate an infinite data stream, creating each value only when needed. This feature provides significant performance benefits and allows for the creation of infinite sequences without overloading memory resources.

Understanding the yield keyword in JavaScript and the concept of maintaining state between function invocations are critical to address this problem. Also, getting acquainted with how JavaScript's .next() method operates with generator objects is important as it is used to retrieve the next Fibonacci number in the sequence.

If you’re not yet familiar with the Fibonacci sequence, consider starting with this problem: Fibonacci Number. This will provide a solid understanding of the sequence, which is crucial for this problem.

Finally, for a more detailed study on JavaScript functions, consider reading the Create Hello World Function Editorial. This article provides valuable insights into the behavior and usage of functions in JavaScript.

JavaScript Generator Functions

Generator functions in JavaScript are special types of functions that can be paused and resumed, enabling them to yield multiple outputs on different invocations. They are defined using the function* keyword, and they return a generator object when invoked.

This generator object is special because it conforms to both the iterable and iterator protocols in JavaScript:

  • The iterable protocol allows JavaScript objects to define or customize their iteration behavior. An object is iterable if it implements the @@iterator method, meaning it has a property with a Symbol.iterator key.
  • The iterator protocol is a protocol that defines a standard way to produce a sequence of values. An object is an iterator when it implements a next() method.

In other words, the generator object returned by a generator function is an iterator and can be used directly in a for...of loop and other JavaScript constructs that expect an iterable.
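For example, a generator object can be consumed directly by for...of or spread syntax (a small sketch):

```javascript
function* range(n) {
  for (let i = 0; i < n; i++) {
    yield i;
  }
}

// The generator object is iterable, so it works with for...of...
for (const value of range(3)) {
  console.log(value); // 0, then 1, then 2
}

// ...and with spread syntax.
console.log([...range(3)]); // [0, 1, 2]
```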

Here’s an example of the iterator protocol in action, using an array’s built-in @@iterator method (no generator function involved):

const gen = [1,2,3][Symbol.iterator]();
console.log(gen.next()); // { value: 1, done: false }

For a deeper understanding of the iteration protocols in JavaScript, check out the MDN reference on Iteration Protocols.

The yield keyword is used within the generator function to specify the values to be returned during its execution. Each time yield is encountered, the function's execution is paused, and the yielded value is emitted. The next invocation of the generator's next() method resumes the execution from where it was last paused.

An example of a simple generator function in JavaScript:

function* simpleGenerator() {
yield 1;
yield 2;
yield 3;
}

const gen = simpleGenerator();
console.log(gen.next().value); // 1
console.log(gen.next().value); // 2
console.log(gen.next().value); // 3

simpleGenerator is a generator function that yields the numbers 1, 2, and 3. When we invoke simpleGenerator, it returns a generator object. We then call the next() method on this object to retrieve the next value yielded by the generator function.

Maintaining State with JavaScript Generators

One of the key features of JavaScript generator functions is their ability to maintain state between invocations. This allows you to create functions that generate a series of related values over multiple calls, such as a sequence of numbers or a sequence of Fibonacci numbers.

When a generator function is invoked, it returns a generator object, but it doesn’t execute any of the function’s code immediately. Instead, the function’s code is executed on-demand, each time the generator’s next() method is invoked. This feature allows the generator to maintain its position in the code for subsequent calls, effectively preserving state between these calls.

function* countUp() {
let count = 0;
while (true) {
yield count++;
}
}
const gen = countUp();
console.log(gen.next().value); // 0
console.log(gen.next().value); // 1
console.log(gen.next().value); // 2

In this example, the countUp generator function yields an infinite series of incrementing numbers. Each time gen.next() is called, the function resumes execution from the last yield, using the current value of the count variable. This demonstrates how generators can maintain state between invocations.

The next() Method in JavaScript Generators

The next() method is a key part of the JavaScript generator function framework. When invoked on a generator object, it resumes the execution of the function until the next yield statement is encountered. The value yielded by the yield statement is returned as the value property of an object, which also includes a done property indicating whether the generator has completed execution.

Here’s an example demonstrating the use of the next() method:

function* simpleGenerator() {
yield 1;
yield 2;
return 3;
}
const gen = simpleGenerator();
console.log(gen.next()); // { value: 1, done: false }
console.log(gen.next()); // { value: 2, done: false }
console.log(gen.next()); // { value: 3, done: true }

Each call to gen.next() resumes the execution of the simpleGenerator function, returning an object that includes the yielded value and a flag indicating whether the function has completed its execution.

Iterators vs Generators

The concepts of iterators and generators are related and often used together in JavaScript, but they serve different purposes. It’s important to distinguish between them to understand their respective roles in managing sequences of data.

An iterator is a design pattern used to traverse a container and access the container’s elements. The iterator pattern decouples algorithms from containers; in some cases, algorithms are necessarily container-specific and thus cannot be decoupled.

In JavaScript, an iterator is an object which defines a sequence and potentially a return value upon its termination. Specifically, an object is an iterator when it implements a next() method with the following semantics:

  • On each call, it returns an object with two properties: value and done.
  • The value property is the value of the current item in the sequence.
  • The done property is a Boolean that is true if the last value in the sequence has already been produced and false otherwise.
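For example, here is an iterator written entirely by hand, with no generator function (a sketch):

```javascript
// A hand-written iterator over 0..end-1, implementing next() manually.
function makeRangeIterator(end) {
  let i = 0;
  return {
    next() {
      if (i < end) {
        return { value: i++, done: false };
      }
      return { value: undefined, done: true };
    }
  };
}

const it = makeRangeIterator(2);
console.log(it.next()); // { value: 0, done: false }
console.log(it.next()); // { value: 1, done: false }
console.log(it.next()); // { value: undefined, done: true }
```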

How Do They Work Together?

When a generator function is called, it returns a generator object. This object is an iterator, meaning it has a next() method that can be called to produce a value from the generator.

Each time next() is called, the generator function's execution is resumed from its paused state, and it continues until it reaches the next yield expression. The value of the yield expression is returned from the next() method.

In conclusion, while the terms iterator and generator are related, they are not interchangeable:

  • Iterators are a concept and a pattern that allows you to traverse sequences of values.
  • Generators are a tool in JavaScript that helps create iterators with a special syntax. Generators can be paused and resumed, making it easier to create complex sequences because the function “remembers” its state.

Use Cases of Generators

Generators, with their ability to produce values on demand, can be employed effectively in various programming scenarios. Here are some of the prominent use cases:

Cancellation of Execution

Generators open the way for two-way communication between generator code and the “execution engine”. Not only can you pause execution, but you can also cancel it or completely alter how the generator code behaves based on the decisions of the “engine”. This unique advantage can be particularly useful when dealing with complex control flows or when you need to manage resources effectively.

function* taskRunner() {
let taskId = 0;
let cancelled = false;
while (!cancelled) {
cancelled = yield taskId++;
}
}
const tasks = taskRunner();
tasks.next(); // starts task 0
tasks.next(); // starts task 1
tasks.next(true); // cancels the tasks

In this generator function taskRunner, we generate a sequence of task IDs. The statement cancelled = yield taskId++; pauses execution and returns the current task ID. The generator then waits for the next invocation of next() before it continues.

The single yield statement cancelled = yield taskId++; in the loop demonstrates a key feature of generators: the ability to send data back into the generator. When next() is called, the value passed as an argument to next() is returned by yield. This allows the caller to send a signal (in this case, a cancellation signal) back into the generator.

By passing true to next(), we signal the generator to cancel the tasks. As a result, cancelled becomes true, the while loop ends, and the generator function stops generating new tasks. This showcases the two-way communication feature of generators, allowing external control over the execution of a generator function.

This ability to pause and resume execution, coupled with the ability to send data back into the generator, provides a lot of flexibility in controlling execution flow, making generators a powerful feature in JavaScript for handling complex, stateful computations or tasks.

Infinite Data Streams

Generators can be implemented to create infinite sequences or data streams. For instance, one might define a generator that generates an endless sequence of incrementing numbers as follows:

function* infiniteSequence() {
let i = 0;
while(true) {
yield i++;
}
}

This generator can be iterated indefinitely to produce an endless sequence of numbers. Interestingly, this gives us the capability to employ infinite loops without the risk of the program crashing. It could also serve as a simple method to generate unique IDs. Each time you invoke the next() method on the generator, it yields a new number, incremented from the previous one.
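As a sketch of the unique-ID idea (the prefix is purely illustrative):

```javascript
function* idGenerator() {
  let id = 1;
  while (true) {
    yield `id-${id++}`; // each call to next() hands out a fresh ID
  }
}

const ids = idGenerator();
console.log(ids.next().value); // "id-1"
console.log(ids.next().value); // "id-2"
```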

Simulation and Game State

If you’re developing a game where a player can move in four directions and want to simulate all possible moves, you could use a generator to create the sequence of moves:

function* playerMoves() {
const directions = ['up', 'down', 'left', 'right'];
for(let direction of directions) {
yield direction;
}
}

You could certainly use a simple loop to iterate over the directions, but using a generator here provides some unique advantages, particularly in more complex game scenarios.

One key advantage of generators is their ability to maintain internal state across multiple calls, with the added benefit of pausing and resuming execution. This functionality is particularly useful in complex scenarios such as a chess game, where the ability to pause the game, store the state, and resume later can be invaluable. Unlike a simple loop, where additional logic would be necessary to manage this, generators inherently provide this functionality.

Consider a chess engine, where the number of potential game states is astronomically large. Instead of generating all possible game states upfront, which is not only impractical but also resource-intensive, a generator can produce them on-demand as each move is made. This leads to a more efficient management of game states, saving memory and computing power. Generators, therefore, can greatly enhance the performance and complexity of applications like a chess engine.

Dealing with Deeply Nested Data Structures

Generators can be used to process deeply nested data structures such as trees or arrays in a different manner compared to traditional recursion. While traditional recursive methods can result in a stack overflow for data structures with a high level of nesting, generators allow us to control the flow of data by yielding items one at a time. This characteristic doesn’t inherently prevent stack overflow but provides us with a unique way of handling and processing data in complex, deeply nested structures.

function* traverseTree(node) {
if (!node) {
return;
}
yield node.value;
if (node.left) {
yield* traverseTree(node.left);
}
if (node.right) {
yield* traverseTree(node.right);
}
}
const tree = {
value: 1,
left: {
value: 2,
left: { value: 4 },
right: { value: 5 },
},
right: {
value: 3,
left: { value: 6 },
right: { value: 7 },
},
};
for (const value of traverseTree(tree)) {
console.log(value); // logs: 1, 2, 4, 5, 3, 6, 7
}

The traverseTree generator function recursively traverses the nodes in the binary tree. It starts at the root, then it yields the root's value and recursively calls itself on the left child and right child if they exist. This process allows the function to handle binary trees of any level of depth.

Both generators and traditional functions in JavaScript interact with the engine’s call stack and can lead to a stack overflow error if the recursion depth exceeds the stack size limit. It’s a common misconception that generator functions prevent stack overflow. Generators are described as ‘lazy’ because they generate values only when explicitly asked for, rather than computing all values upfront. This can lead to more efficient memory usage, especially when dealing with large but finite data structures, as they generate values on demand.

However, this ‘lazy’ computation does not affect the call stack depth. In other words, even though generators can handle memory more efficiently by generating values on demand, they do not inherently reduce the depth of the call stack or prevent stack overflow. Therefore, it’s crucial to manage recursion depth carefully when using both generators and traditional recursion.
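One way to sidestep deep call stacks entirely (a sketch, not from the original) is to keep the traversal state in an explicit stack instead of in recursive calls:

```javascript
// Pre-order traversal with an explicit stack: recursion depth stays at 1,
// so very deep trees cannot overflow the call stack.
function* traverseTreeIterative(root) {
  const stack = root ? [root] : [];
  while (stack.length > 0) {
    const node = stack.pop();
    yield node.value;
    // Push right first so the left subtree is visited first.
    if (node.right) stack.push(node.right);
    if (node.left) stack.push(node.left);
  }
}

const tree = {
  value: 1,
  left: { value: 2, left: { value: 4 }, right: { value: 5 } },
  right: { value: 3, left: { value: 6 }, right: { value: 7 } },
};
console.log([...traverseTreeIterative(tree)]); // [1, 2, 4, 5, 3, 6, 7]
```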

To solve the leetcode problem

var fibGenerator = function*() {
let prev1 = 0;
let prev2 = 1;

while(true) {
yield prev1;
const temp = prev1;
prev1 = prev2;
prev2 += temp;
}
};
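A quick check of the first few yielded values (redefining the generator here so the snippet is self-contained):

```javascript
var fibGenerator = function*() {
  let prev1 = 0;
  let prev2 = 1;
  while (true) {
    yield prev1;
    const temp = prev1;
    prev1 = prev2;
    prev2 += temp;
  }
};

const gen = fibGenerator();
const firstSix = [];
for (let i = 0; i < 6; i++) {
  firstSix.push(gen.next().value);
}
console.log(firstSix); // [0, 1, 1, 2, 3, 5]
```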

The end :) I hope I can find a job soon.
