Posted by tintin_2003 on 2025-11-20 09:14:41 |
Linear search scans each element in sequence until it finds the target. It is best for small lists or unsorted arrays: it is simple, stable, and needs no preprocessing.
Complexity:
Time: O(n) worst case and average; O(1) best case (the target is the first element).
Space: O(1).
Dry run: scan left to right: arr[0]=5 ≠ 1; arr[1]=2 ≠ 1; arr[2]=9 ≠ 1; arr[3]=1 → match, return index 3.
Binary search finds a key in a sorted array by repeatedly halving the search interval. It is very efficient for read-only sorted data.
Complexity:
Time: O(log n).
Space: O(1).
Dry run (6-element sorted array, key 5): mid=(0+5)/2=2 → arr[2]=4 < 5 → l=3; mid=(3+5)/2=4 → arr[4]=7 > 5 → r=3; mid=3 → arr[3]=5 → found at index 3.
Insertion sort builds the sorted array one element at a time, inserting each element into its correct position among the elements before it. It works well for small arrays and nearly sorted data.
Complexity:
Time: O(n^2) worst-case; O(n) best-case (already sorted).
Space: O(1).
Dry run on [5,2,9,1,6]: insert 2 before 5 → [2,5,9,1,6]; 9 stays in place after 5 → [2,5,9,1,6]; insert 1 at the front → [1,2,5,9,6]; insert 6 between 5 and 9 → [1,2,5,6,9].
Selection sort repeatedly selects the smallest (or largest) item in the unsorted portion and swaps it with the element at the start of that portion. It is simple, but not stable in its usual array form and inefficient for large lists.
Complexity:
Time: O(n^2) in all cases (the number of comparisons does not depend on the input order).
Space: O(1).
Dry run: find the minimum of the whole array (11) and swap it into position 0; find the minimum of the remainder (12) and swap it into position 1; and so on.
Bubble sort repeatedly swaps adjacent elements that are out of order, "bubbling" the largest elements to the end. It is simple but inefficient.
Complexity:
Time: O(n^2) worst/average, O(n) best (with early exit).
Space: O(1).
Dry run: each pass swaps adjacent out-of-order pairs, moving the largest remaining value to the end, until a pass makes no swaps.
Merge sort is divide-and-conquer: split the array, sort each half, then merge the sorted halves. It is stable and O(n log n), but requires an extra temporary array.
Complexity:
Time: O(n log n) in all cases.
Space: O(n) extra for the temporary merge array.
Dry run: split repeatedly down to single elements, then merge pairs: [12],[11] → [11,12], and so on, until one fully merged sorted array remains.
Quicksort is divide-and-conquer around a pivot: partition elements into those less than and those greater than the pivot, then sort the partitions. It has excellent average performance, but degrades to O(n^2) with bad pivots.
Complexity:
Time: Average O(n log n), worst O(n^2).
Space: O(log n) average recursion depth; O(n) worst recursion depth (if unbalanced).
Dry run: choose the last element as pivot, partition the array around it (smaller elements left, larger right), then recurse on both partitions.
A binary search tree (BST) stores values so that left child < node < right child. Here the BST is implemented with plain arrays: val[], left[], right[], with root holding the index of the root node. This avoids pointers and structs.
Complexity:
Time: Search/insert average O(log n) for balanced tree, worst O(n).
Space: O(n) for arrays.
Dry run: insert 50 as root; 30 → left of 50; 20 → left of 30; 40 → right of 30; 70 → right of 50; 60 → left of 70; 80 → right of 70. An inorder traversal then prints 20 30 40 50 60 70 80 (sorted order).
A binary heap is a complete binary tree stored in an array; the children of the node at index i sit at indices 2i+1 and 2i+2. Heapsort builds a max-heap and repeatedly extracts the maximum, sorting in O(n log n).
Complexity:
Time: O(n log n) worst/average.
Space: O(1) additional (in-place).
Dry run: build a max-heap (the root holds the maximum), swap the root with the last element, shrink the heap by one, heapify the root, and repeat.
A priority queue can be implemented as a min-heap stored in an array. It supports insert, extract_min, and decrease_key without pointers.
Complexity:
Insert: O(log n). Extract min: O(log n). Decrease key: O(log n).
Space: O(n).
Dry run: insert maintains the heap property by sifting up; extract_min removes the root, moves the last element to the root, and sifts it down. (The exact array layout depends on the insertion order.)
BFS explores a graph layer by layer using a queue; it finds shortest paths (by edge count) in unweighted graphs. It can be implemented with arrays and queue indices.
Complexity:
Time: O(n^2) with an adjacency matrix (O(n + m) with adjacency lists, for n vertices and m edges).
Space: O(n^2) for the adjacency matrix.
Dry run: visit 0 and enqueue neighbors 1 and 2; visit 1, enqueue 3; visit 2, enqueue 4; visit 3, enqueue 5; and so on until the queue empties.
DFS explores as deep as possible along each branch before backtracking. Using an explicit stack avoids both recursion and pointers.
Complexity:
Time: O(n^2) with adjacency matrix.
Space: O(n^2) adjacency matrix + O(n) stack.
Dry run (order depends on the neighbor iteration order): start at 0, push neighbors 2 and 1; pop 1 → visit 1 → push 3; pop 3 → visit 3 → push 5; and so on.
Topological sort orders the vertices of a DAG so that for every directed edge u→v, u comes before v. Kahn's algorithm repeatedly removes nodes with in-degree 0.
Complexity:
Time: O(n^2) with adjacency matrix. (O(n+m) with adjacency list.)
Space: O(n^2) for adjacency matrix.
Dry run (one valid order): compute in-degrees; enqueue the nodes with in-degree 0 (4 and 5); dequeue 4 and decrement the in-degrees of 0 and 1; continue until all nodes are output.
Kruskal's algorithm sorts edges by weight and adds each edge that does not form a cycle, using a union-find (disjoint set) structure for cycle detection. Union-find is implemented with two arrays, parent[] and rank[].
Note: the code above uses a tiny Edge struct for readability. If structs must be avoided entirely, the edges can instead be stored in three parallel arrays edge_u[], edge_v[], edge_w[] that are sorted together.
Complexity:
Time: sorting the edges dominates at O(m log m) with an efficient sort (a selection sort, O(m^2), keeps the code simple for small inputs); each union-find operation takes nearly O(α(n)).
Space: O(m + n).
Dry run: edges sorted by weight: (2,3,4), (0,3,5), (0,2,6), (0,1,10), (1,3,15). Pick (2,3): add. Pick (0,3): add. Pick (0,2): would create a cycle, skip. Pick (0,1): add. The MST now has n−1 = 3 edges, so stop.
Prim's algorithm grows the MST from an arbitrary start node, repeatedly adding the cheapest edge that connects the tree built so far to the remaining vertices. Arrays track the minimum edge from each vertex to the tree.
Complexity:
Time: O(n^2) with simple array implementation. With a priority queue, O(m log n).
Space: O(n^2) for matrix.
Dry run: start at 0, then at each step add the cheapest edge connecting a tree vertex to a non-tree vertex (vertex 1 first, then 2 via 1, and so on).
Floyd–Warshall computes shortest paths between all pairs of vertices in a weighted graph (it handles negative edge weights but not negative cycles), using dynamic programming over a triple loop.
Complexity:
Time: O(n^3).
Space: O(n^2).
Dry run: for each intermediate node k = 0..3, relax all pairs. For example, dist[0][2], initially INF, becomes dist[0][1] + dist[1][2] = 5 + 3 = 8, and may improve further for later k.