build(deps): bump axios from 0.16.2 to 0.21.2 #1753

Merged on Mar 7, 2022 (4 commits)
152 changes: 66 additions & 86 deletions notes/BGOONZ_BLOG_2.0.wiki/Data-Structures.md
@@ -186,19 +186,11 @@ Disadvantages

Each hash table can be very different, from the types of its keys and values to the way its hash function works. Due to these differences and the multi-layered nature of a hash table, it is nearly impossible to capture it in a single, general description.
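As a concrete illustration (a minimal sketch, not taken from the original article), here is one way a string-keyed hash table with separate chaining can look; the hash function and bucket count are arbitrary choices, which is exactly why no single description covers every hash table:

```js
// Minimal string-keyed hash table sketch using separate chaining.
// The hash function and bucket count here are arbitrary choices;
// changing either changes how keys collide and how lookups behave.
class SimpleHashTable {
  constructor(size = 16) {
    this.buckets = Array.from({ length: size }, () => []);
  }
  hash(key) {
    let h = 0;
    for (const ch of key) h = (h * 31 + ch.charCodeAt(0)) % this.buckets.length;
    return h;
  }
  set(key, value) {
    const bucket = this.buckets[this.hash(key)];
    const entry = bucket.find(([k]) => k === key);
    if (entry) entry[1] = value;
    else bucket.push([key, value]);
  }
  get(key) {
    const entry = this.buckets[this.hash(key)].find(([k]) => k === key);
    return entry ? entry[1] : undefined;
  }
}

const table = new SimpleHashTable();
table.set('name', 'Ada');
console.log(table.get('name')); // Ada
```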




## Data structure interview questions


<details>

<summary> 🔥See Interview Questions </summary>

For many developers and programmers, data structures are most important for [cracking JavaScript coding interviews](https://www.educative.io/blog/acing-the-javascript-interview-top-questions-explained). Questions and problems on data structures are fundamental to modern-day coding interviews. In fact, they have a significant bearing on your hireability and entry-level salary as a candidate.

@@ -225,18 +217,17 @@ There are two ways you could solve this coding problem in an interview. Let's discuss both.
#### Solution #1: Doing it "by hand"

```js
function removeEven(arr) {
  const odds = [];
  for (let number of arr) {
    if (number % 2 != 0)
      // Check if the item in the list is NOT even ('%' is the modulus operator)
      odds.push(number); // If it isn't even, append it to the odds list
  }
  return odds; // Return the new list
}

let example = removeEven([3, 2, 41, 3, 34]);
console.log('EXAMPLE:', example); // EXAMPLE: [ 3, 41, 3 ]
```

This approach starts with the first element of the array. If that current element is not even, it pushes this element into a new array. If it is even, it will move to the next element, repeating until it reaches the end of the array. In regards to time complexity, since the entire array has to be iterated over, this solution is in _O(n)_.
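The second approach is collapsed in this diff view. A common alternative (a sketch only, with an illustrative helper name rather than the article's own) uses the built-in `Array.prototype.filter`, which also runs in _O(n)_:

```js
// Sketch: keep only the odd numbers with filter (assumed alternative; not shown in this diff).
function removeEvenWithFilter(arr) {
  return arr.filter((number) => number % 2 !== 0);
}

console.log(removeEvenWithFilter([3, 2, 41, 3, 34])); // [ 3, 41, 3 ]
```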
@@ -484,39 +475,36 @@ BinarySearchTree.js
Node.js

```js
'use strict';
const Node = require('./Node.js');

module.exports = class BinarySearchTree {
  constructor(rootValue) {
    this.root = new Node(rootValue);
  }
  // Recursively find the correct spot for newValue and attach a new Node there.
  insert(currentNode, newValue) {
    if (currentNode === null) {
      currentNode = new Node(newValue);
    } else if (newValue < currentNode.val) {
      currentNode.leftChild = this.insert(currentNode.leftChild, newValue);
    } else {
      currentNode.rightChild = this.insert(currentNode.rightChild, newValue);
    }
    return currentNode;
  }
  // Public insert that handles an empty tree before delegating to insert().
  insertBST(newValue) {
    if (this.root == null) {
      this.root = new Node(newValue);
      return;
    }
    this.insert(this.root, newValue);
  }
  // Pre-order traversal: visit the node, then its left subtree, then its right subtree.
  preOrderPrint(currentNode) {
    if (currentNode !== null) {
      console.log(currentNode.val);
      this.preOrderPrint(currentNode.leftChild);
      this.preOrderPrint(currentNode.rightChild); // needed to cover the right subtree
    }
  }
};
```
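The `Node.js` module referenced above is collapsed in this diff view. A short usage sketch, assuming `Node` simply stores `val`, `leftChild`, and `rightChild` (an assumption about the hidden file):

```js
// Assumed shape of the hidden Node.js module:
// module.exports = class Node {
//   constructor(val) {
//     this.val = val;
//     this.leftChild = null;
//     this.rightChild = null;
//   }
// };
const BinarySearchTree = require('./BinarySearchTree.js');

const bst = new BinarySearchTree(6);
[4, 9, 2, 5, 8, 12].forEach((value) => bst.insertBST(value));
bst.preOrderPrint(bst.root); // logs 6, 4, 2, 5, 9, 8, 12 (one value per line)
```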

---
@@ -539,8 +527,6 @@ Output: A graph with the edge between the source and the destination removed.
```js
removeEdge(graph, 2, 3);
```



![widget](https://www.educative.io/cdn-cgi/image/f=auto,fit=contain,w=600/api/page/6094484883374080/image/download/6038590984290304)

![widget](https://www.educative.io/cdn-cgi/image/f=auto,fit=contain,w=300,q=10/api/page/6094484883374080/image/download/6038590984290304)
@@ -556,32 +542,30 @@ LinkedList.js
Node.js

```js
const LinkedList = require('./LinkedList.js');
const Node = require('./Node.js');

module.exports = class Graph {
  constructor(vertices) {
    this.vertices = vertices;
    this.list = [];
    let it;
    // One linked list per vertex to hold its outgoing edges.
    for (it = 0; it < vertices; it++) {
      let temp = new LinkedList();
      this.list.push(temp);
    }
  }
  addEdge(source, destination) {
    // Record the directed edge source -> destination.
    if (source < this.vertices && destination < this.vertices) this.list[source].insertAtHead(destination);
    return this;
  }
  printGraph() {
    console.log('>>Adjacency List of Directed Graph<<');
    let i;
    for (i = 0; i < this.list.length; i++) {
      process.stdout.write(`|${String(i)}| => `);
    }
  }
};
```
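The `removeEdge()` solution itself is collapsed in this diff view. One possible sketch, assuming the `LinkedList` class exposes a `deleteVal(value)` method (a hypothetical helper; the real list API is not shown here):

```js
// Hypothetical sketch: drop the directed edge source -> destination from the adjacency list.
// `deleteVal` is an assumed LinkedList method that removes the first node holding `value`.
function removeEdge(graph, source, destination) {
  if (source < graph.vertices && destination < graph.vertices) {
    graph.list[source].deleteVal(destination);
  }
  return graph;
}
```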

---
@@ -607,25 +591,22 @@ result = [-2, 1, 5, 9, 4, 6, 7];
To solve this problem, we must min heapify all parent nodes. Take a look.

```js
function minHeapify(heap, index) {
  // For a 0-indexed array, the children of node `index` live at 2i + 1 and 2i + 2.
  const left = index * 2 + 1;
  const right = index * 2 + 2;
  let smallest = index;
  if (heap.length > left && heap[smallest] > heap[left]) {
    smallest = left;
  }
  if (heap.length > right && heap[smallest] > heap[right]) smallest = right;
  if (smallest != index) {
    // Swap the parent with its smaller child, then repair the affected subtree.
    const tmp = heap[smallest];
    heap[smallest] = heap[index];
    heap[index] = tmp;
    minHeapify(heap, smallest);
  }
  return heap;
}
```

---
@@ -646,5 +627,4 @@ console.log(convertMax(maxHeap));

We consider `maxHeap` to be a regular array and reorder it to accurately represent a min-heap. You can see this done in the code above. The `convertMax()` function then restores the heap property on all nodes from the lowest parent node by calling the `minHeapify()` function. In regards to time complexity, this solution takes _O(n log n)_ time.
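The `convertMax()` implementation itself is collapsed in this diff view. A minimal sketch consistent with the description above (the loop bounds and the sample input are assumptions):

```js
// Sketch: heapify every parent node, starting from the lowest parent and moving toward the root.
function convertMax(maxHeap) {
  for (let i = Math.floor(maxHeap.length / 2) - 1; i >= 0; i--) {
    minHeapify(maxHeap, i);
  }
  return maxHeap;
}

const maxHeap = [9, 4, 7, 1, -2, 6, 5]; // assumed example input
console.log(convertMax(maxHeap)); // with this input: [ -2, 1, 5, 9, 4, 6, 7 ]
```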


</details>
4 changes: 2 additions & 2 deletions notes/BGOONZ_BLOG_2.0.wiki/anatomy-of-search-engine.md
@@ -26,8 +26,8 @@ Computer Science Department, Stanford University, Stanford, CA 94305
### Abstract

> In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at [http://google.stanford.edu/](http://google.stanford.edu/)
> To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date.
> Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.
>
> **Keywords**: World Wide Web, Search Engines, Information Retrieval, PageRank, Google
