
# Top 10 Embedded Algorithms

Updated: 2022-08-10 17:35:51

## Algorithm 1: Quick sort method

Quick sort is a sorting algorithm developed by Tony Hoare. On average it takes O(n log n) comparisons to sort n items; in the worst case O(n²) comparisons are required, but this is uncommon. Quick sort is usually significantly faster than other O(n log n) algorithms because its inner loop can be implemented efficiently on most architectures.

Quick sort uses a divide-and-conquer strategy to split a list into two sub-lists.

Algorithm steps

1. Pick one element from the list, called the "pivot".

2. Reorder the list so that all elements smaller than the pivot come before it and all elements larger than the pivot come after it (elements equal to the pivot can go to either side). After this partitioning, the pivot is in its final position. This is called the partition operation.

3. Recursively sort the sub-list of elements smaller than the pivot and the sub-list of elements larger than the pivot.
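The three steps can be sketched in Python (a minimal, out-of-place version for clarity; a production implementation would typically partition in place):

```python
def quick_sort(items):
    """Sort a list of comparable items with quick sort."""
    if len(items) <= 1:      # base case: size 0 or 1 is already sorted
        return items
    pivot = items[0]         # step 1: pick a pivot
    # step 2: partition the remaining elements around the pivot
    smaller = [x for x in items[1:] if x < pivot]
    larger = [x for x in items[1:] if x >= pivot]
    # step 3: recursively sort both partitions
    return quick_sort(smaller) + [pivot] + quick_sort(larger)
```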

The base case of the recursion is a list of size zero or one, which is always sorted. Although the recursion continues, the algorithm always terminates, because each iteration puts at least one element into its final position.

## Algorithm 2: Heapsort algorithm

Heapsort is a sorting algorithm built on the heap data structure. A heap is a structure that approximates a complete binary tree and satisfies the heap property: the key of each child node is always less than or equal to (in a min-heap, greater than or equal to) the key of its parent node.

The average time complexity of heapsort is O(n log n).
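As a concrete illustration, a minimal heapsort sketch in Python (the `sift_down` helper plays the role of the shift_down operation referenced in the steps below):

```python
def heap_sort(a):
    """In-place heapsort of a list of comparable items."""
    def sift_down(start, end):
        # Move a[start] down until the max-heap property holds on a[start..end].
        root = start
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end and a[child] < a[child + 1]:
                child += 1                      # pick the larger child
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return

    n = len(a)
    for start in range(n // 2 - 1, -1, -1):     # step 1: build the max-heap
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):             # steps 2-4: swap head/tail, shrink, sift down
        a[0], a[end] = a[end], a[0]
        sift_down(0, end - 1)
    return a
```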

Algorithm steps

1. Create a heap H[0..n-1].

2. Swap the heap head (maximum) with the heap tail.

3. Reduce the size of the heap by 1 and call shift_down(0) to sift the new top element down to its correct position.

4. Repeat steps 2 and 3 until the size of the heap is 1.

## Algorithm 3: Merge sort

Merge sort is an efficient sorting algorithm built on the merge operation. It is a very typical application of the divide-and-conquer method.
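A minimal Python sketch of the merge operation and the recursive sort (assuming lists of comparable items):

```python
def merge(left, right):
    """Merge two already-sorted lists into one sorted list."""
    merged = []                 # space for the merged sequence (step 1)
    i = j = 0                   # two pointers at the start of each sequence (step 2)
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:             # step 3: move the smaller element
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])     # step 5: copy whatever remains
    merged.extend(right[j:])
    return merged

def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))
```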

Algorithm steps

1. Allocate a space whose size is the sum of the two sorted sequences, used to store the merged sequence.

2. Set two pointers, initially at the starting positions of the two already-sorted sequences.

3. Compare the elements the two pointers point to, put the smaller one into the merged space, and advance that pointer.

4. Repeat step 3 until one pointer reaches the end of its sequence.

5. Copy all remaining elements of the other sequence directly to the end of the merged sequence.

## Algorithm 4: Binary search

Binary search is an algorithm for finding a specific element in a sorted array. The search starts at the middle element of the array: if the middle element is exactly the element sought, the search ends. Otherwise, if the sought element is larger or smaller than the middle element, the search continues in the half of the array above or below the middle element, again comparing against that half's middle element. If at some step the remaining range is empty, the element is not in the array. Each comparison halves the search range, so the time complexity is O(log n).
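The halving loop can be sketched as:

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:                 # non-empty search range
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        if target > arr[mid]:
            lo = mid + 1            # search the upper half
        else:
            hi = mid - 1            # search the lower half
    return -1                       # range became empty: not found
```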

## Algorithm 5: BFPRT (linear-time selection)

The BFPRT algorithm solves the classic problem of selecting the k-th largest (or k-th smallest) element from a sequence of n elements. Through a clever choice of pivot, BFPRT guarantees linear time complexity even in the worst case. The idea is similar to the partitioning in quick sort, but the algorithm's five authors (Blum, Floyd, Pratt, Rivest, and Tarjan) added a subtle refinement that achieves O(n) worst-case time.

Algorithm steps

1. Divide the n elements into ⌈n/5⌉ groups of 5.

2. Sort each group with any simple method, such as insertion sort, and take each group's median.

3. Recursively call the selection algorithm to find the median of all the medians from the previous step; call it x. (If the number of medians is even, take the smaller of the two middle values.)

4. Use x to partition the array; let k be the number of elements less than or equal to x, so n − k elements are greater than x.

5. If i == k, return x. If i < k, recursively find the i-th smallest element among the elements less than x. If i > k, recursively find the (i − k)-th smallest element among the elements greater than x.

Termination condition: n = 1; the single element returned is the i-th smallest.
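The procedure above can be sketched in Python (i is 1-based; duplicates are handled by counting elements less than or equal to x):

```python
def select(items, i):
    """Return the i-th smallest element using the median-of-medians pivot."""
    if len(items) == 1:                 # termination: n = 1
        return items[0]
    # steps 1-2: groups of 5, median of each group (smaller middle if even)
    groups = [items[j:j + 5] for j in range(0, len(items), 5)]
    medians = [sorted(g)[(len(g) - 1) // 2] for g in groups]
    # step 3: median of medians, found recursively
    x = select(medians, (len(medians) + 1) // 2)
    # step 4: partition around x
    smaller = [e for e in items if e < x]
    larger = [e for e in items if e > x]
    k = len(items) - len(larger)        # count of elements <= x
    # step 5: recurse into the correct side, or return x
    if i <= len(smaller):
        return select(smaller, i)
    if i > k:
        return select(larger, i - k)
    return x
```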

## Algorithm 6: DFS (depth-first search)

Depth-first search (DFS) is a search algorithm. It traverses nodes along the depth of the tree, exploring each branch as deeply as possible. When all edges of a node v have been explored, the search backtracks to the node from whose edge v was discovered. This process continues until all nodes reachable from the source node have been found. DFS is an uninformed (blind) search.

Depth-first search is a classical algorithm in graph theory. Using DFS, a topological ordering of the target graph can be generated, which can be used to solve many related graph-theory problems, such as the maximum path problem. A stack data structure is generally used to implement the DFS algorithm.

Algorithm Steps

Depth-first traversal graph algorithm steps.

1. Visit the vertex v.

2. Starting in turn from the unvisited neighbors of v, traverse the graph depth-first, until every vertex in the graph that has a path to v has been visited.

3. If there are still unvisited vertices in the graph, start from one of them and traverse depth-first again, until all vertices in the graph have been visited.

The above description may be abstract, so let's take an example.

After visiting a starting vertex v in the graph, DFS proceeds from v to any one of its neighboring vertices w1; from w1 it visits a vertex w2 that is adjacent to w1 but not yet visited; it then continues from w2 in the same way, until it reaches a vertex u all of whose neighbors have already been visited.

Then the search backtracks to the vertex visited just before u and checks whether it has other neighbors that have not been visited. If so, it visits that vertex and continues as before; if not, it backtracks one more step and searches again. This process repeats until all vertices in the connected graph have been visited.

## Algorithm 7: BFS (Breadth First Search)

Breadth-first search (BFS) is a graph search algorithm. Simply put, BFS starts from the root node and traverses the nodes of the tree (graph) level by level along its width. The algorithm stops when all nodes have been visited. BFS is also an uninformed (blind) search. The BFS algorithm is generally implemented with the aid of a queue data structure.
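A minimal queue-based sketch in Python (the graph is assumed to be an adjacency dict; the function reports whether the target is reachable):

```python
from collections import deque

def bfs_search(graph, root, target):
    """Breadth-first search from root; True if target is found."""
    queue = deque([root])          # put the root node in the queue first
    checked = {root}
    while queue:                   # empty queue means the whole graph was checked
        node = queue.popleft()     # take the first node from the queue
        if node == target:
            return True
        for child in graph.get(node, []):
            if child not in checked:       # enqueue unchecked direct children
                checked.add(child)
                queue.append(child)
    return False                   # no target found
```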

Algorithm steps

1. First, place the root node into the queue.

2. Take the first node from the queue and check if it is the target.

If the target is found, end the search and return the result.

Otherwise, add all its direct children that have not been checked into the queue.

3. If the queue is empty, the whole graph has been checked; that is, the target is not in the graph. End the search and report "No target found."

4. Repeat step 2.

## Algorithm 8: Dijkstra's algorithm

Dijkstra's algorithm was proposed by the Dutch computer scientist Edsger Dijkstra. It solves the single-source shortest path problem for directed graphs with non-negative edge weights using a greedy, breadth-first-like expansion, and it eventually yields a shortest-path tree. The algorithm is commonly used in routing, or as a submodule of other graph algorithms.

The input to the algorithm is a weighted directed graph G and a source vertex S in G. We denote the set of all vertices in G by V. Each edge in the graph is an ordered pair of vertices: (u, v) means a path connecting vertex u to v. We denote the set of all edges in G by E, and the weight function w: E → [0, ∞) defines the weights of edges. Thus, w(u, v) is the non-negative weight of the edge from vertex u to vertex v. The weight of an edge can be thought of as the distance between two vertices, and the weight of a path between any two points is the sum of the weights of all the edges on the path. Given vertices s and t in V, Dijkstra's algorithm finds the path of lowest weight from s to t (i.e., the shortest path). The algorithm can also find the shortest paths from a vertex s to every other vertex in the graph. Dijkstra's algorithm is the fastest known single-source shortest path algorithm for directed graphs without negative weights.
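A minimal Python sketch using a binary heap as the priority queue (the graph is assumed to be an adjacency dict of (neighbor, weight) pairs):

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source in a non-negatively weighted graph."""
    dist = {source: 0}
    pq = [(0, source)]                     # (distance, vertex) priority queue
    while pq:
        d, u = heapq.heappop(pq)           # closest unfinished vertex
        if d > dist.get(u, float('inf')):
            continue                       # stale queue entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd               # relaxation: shorter path via u
                heapq.heappush(pq, (nd, v))
    return dist
```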

Algorithm steps

1. Initially, let S = {V0} and T = {remaining vertices}, and set the distance value of each vertex Vi in T as follows:

If the arc <V0, Vi> exists, d(V0, Vi) is the weight on that arc;

if it does not exist, d(V0, Vi) = ∞.

2. Choose from T a vertex W, not in S, whose distance value is the smallest, and add W to S.

3. Modify the distance values of the remaining vertices in T: if adding W as an intermediate vertex shortens the distance from V0 to some Vi, update that distance value.

Repeat steps 2 and 3 until S contains all vertices.

## Algorithm 9: Dynamic programming algorithm

Dynamic programming is a method used in mathematics, computer science, and economics to solve complex problems by decomposing the original problem into relatively simple subproblems. Dynamic programming is often applied to problems with overlapping subproblems and optimal substructure properties, and dynamic programming methods tend to take much less time than the plain solution method.

The basic idea behind dynamic programming is very simple. To solve a given problem, we solve its different parts (i.e., subproblems) and then combine the solutions of the subproblems to solve the original problem. Often, many subproblems are very similar, so dynamic programming attempts to reduce the computational effort by solving each subproblem only once: once the solution to a given subproblem has been computed, it is memorized and stored so that the next time the same subproblem's solution is needed, the table can be consulted directly. This approach is particularly useful when the number of repeated subproblems grows exponentially with the size of the input.

The most classic problem in dynamic programming is the knapsack problem.

Algorithm steps

1. Optimal substructure property. A problem has the optimal substructure property (i.e., it satisfies the principle of optimality) if an optimal solution to the problem contains optimal solutions to its subproblems. The optimal substructure property provides an important clue for the dynamic programming algorithm to solve the problem.

2. Subproblem overlap property. The subproblem overlap property means that when a recursive algorithm solves a problem top-down, the subproblem generated at each step is not always a new one, and some subproblems are computed repeatedly. The dynamic programming algorithm exploits this by computing each subproblem only once and saving the result in a table, so that when an already-computed subproblem is needed again, its result is looked up in the table, thus achieving high efficiency.
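As an illustration of both properties, a bottom-up Python sketch of the classic 0/1 knapsack problem mentioned above:

```python
def knapsack(weights, values, capacity):
    """0/1 knapsack: maximum total value within the weight capacity."""
    n = len(weights)
    # dp[i][c] = best value using the first i items with capacity c
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]                       # option: skip item i
            if weights[i - 1] <= c:                       # option: take item i
                take = dp[i - 1][c - weights[i - 1]] + values[i - 1]
                dp[i][c] = max(dp[i][c], take)
    return dp[n][capacity]
```

Each table entry is computed once from smaller subproblems (overlap), and an optimal fill of capacity c builds on optimal fills of smaller capacities (optimal substructure).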

## Algorithm 10: Naive Bayes classification algorithm

The naive Bayes classification algorithm is a simple probabilistic classifier based on Bayes' theorem. Bayesian classification rests on probabilistic inference: making inferences and decisions when conditions are uncertain and only the probabilities of outcomes are known. Probabilistic reasoning is the counterpart of deterministic reasoning. The naive Bayes classifier adds an independence assumption: each feature of a sample is assumed to be uncorrelated with the other features.

The naive Bayes classifier relies on an accurate natural probability model and achieves very good classification results on supervised learning sample sets. In many practical applications, naive Bayes parameters are estimated by maximum likelihood; in other words, the model can work without invoking Bayesian probability or any other Bayesian method.
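A minimal categorical naive Bayes sketch in Python using raw maximum-likelihood frequency counts (no smoothing; a real implementation would add Laplace smoothing and work in log space to avoid underflow):

```python
from collections import Counter, defaultdict

def train_nb(samples, labels):
    """MLE estimates: class counts and per-class feature-value counts."""
    priors = Counter(labels)
    cond = defaultdict(Counter)        # cond[(class, position)][value] -> count
    for feats, y in zip(samples, labels):
        for j, v in enumerate(feats):
            cond[(y, j)][v] += 1
    return priors, cond

def predict_nb(priors, cond, feats):
    """Pick the class maximizing P(y) * prod_j P(x_j | y)."""
    total = sum(priors.values())
    best, best_p = None, -1.0
    for y, cy in priors.items():
        p = cy / total                 # prior P(y)
        for j, v in enumerate(feats):  # independence assumption: multiply per-feature
            p *= cond[(y, j)][v] / cy
        if p > best_p:
            best, best_p = y, p
    return best
```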
