[{"content":"The maximum built-in integer unsigned long long in C is about $1.8\\ times 10 ^ {19} $, which will overflow once the number exceeds this range. The high-precision calculation uses an integer array to store each bit of the large number, and cooperates with the loop simulation vertical operation to break the limit of the number of bits.\nQuestions Background Standard shapes are often inadequate in competition and engineering, and typical scenarios include:\nCalculate $100! $ (~ 158 digits) Large power operations (e.g. RSA key generation) Financial calculations requiring precise results Core issues For two non-negative integers of arbitrary length, add, subtract, multiply, and divide four operations are implemented, and the results are accurate.\nBINDING EFFECT Up to $10 ^ 3$ digits (adjustable MAXN extension) This article only deals with non-negative integers; negative numbers require the addition of symbol bits Idea Analysis Core Ideas Save each digit of the large number * * in reverse order * * into the int' array: d [0]for one digit,d [1]` for ten digits, and so on. The advantage of reverse order is that the carry direction (low → high) is consistent with the growth direction of the array subscript, and the loop is the most natural to write.\nData Structures __ code_block_0 __\nLayout of the number 12345 in the array:\nSubscript d [0] d [1] d [2] d [3] d [4] Value 5 4 3 2 1 Key points of each operation Addition * *: add bit by bit, record the carry with the variable carry, and loop until the highest carry is also processed. Subtraction * *: Subtract bit by bit, record the borrow with borrow, and guarantee $ a\\ geq b $ before calling. Multiplication * *: double loop, the result of a [i] * b [j] is accumulated to the \u0026lsquo;i + jbit of the result, and finally the carry is processed uniformly. Intermediate results are spill-proof withlong long`. 
- **Division by a small integer**: starting from the highest digit, maintain a remainder r; at each step r = r * 10 + d[i], the quotient digit is r / b, and the remainder becomes r % b.\nCode Implementation\nInitialization and I/O __ code_block_1 __\nAddition and Subtraction __ code_block_2 __\nMultiplication __ code_block_3 __\nDivision by a small integer __ code_block_4 __\nFull Demo Program __ code_block_5 __\nApplication: Computing Factorials __ code_block_6 __\nComplexity, Pros and Cons\nTime complexity, with $n$ the number of digits:\n| Operation | This implementation | Optimization ceiling |\n|---|---|---|\n| Add/Subtract | $O(n)$ | — |\n| Multiplication | $O(n^2)$ | $O(n\\log n)$ (FFT) |\n| Division by a small integer | $O(n)$ | — |\n| Factorial $n!$ | $O(n^2 \\cdot \\log n)$ | — |\nSpace complexity: $O(n)$, the size of the d[MAXN] array in the struct.\nPros:\n- The principle is intuitive and corresponds exactly to manual column arithmetic, so it is easy to understand and debug\n- Pure C implementation with no external dependencies\n- Addition and subtraction are $O(n)$, fully adequate for medium sizes ($\\leq 10^4$ digits)\nCons:\n- Multiplication is $O(n^2)$, slow for very large numbers ($\u0026gt; 10^5$ digits)\n- Only one decimal digit per array element, so the constant factor is large; switching to base 10000 (4 digits per element) speeds things up by roughly 4x\n- Negative numbers are not yet supported; a sign bit would need to be introduced\nDigit-packing optimization\nChange the base from 10 to 10000 and store 4 decimal digits per array element:\n__ code_block_7 __\nThe logic of addition, subtraction, and multiplication stays exactly the same; just change every % 10 to % base and every / 10 to / base.\n","date":"2026-03-15T22:06:00+08:00","permalink":"https://w2343419-del.github.io/WangScape/en/p/high-precision-calculations-in-c/","title":"High-precision calculations in C"},{"content":"Complete Guide to Algorithmic Complexity - Time, Space, and Asymptotic Notation
Complexity analysis is a core tool for measuring algorithmic efficiency; it helps us anticipate performance bottlenecks before writing any code. This article systematically explains time complexity, space complexity, and the meaning and use of the three asymptotic symbols $O$, $\\Omega$, and $\\Theta$, illustrated with a complete worked example.\nWhat is Complexity\nWhen we evaluate an algorithm, we cannot just check whether it produces correct results; we must also ask how it performs as **the amount of data grows**. Complexity is a mathematical tool describing the trend of the resources an algorithm needs as the input size $n$ grows.\n- **Time complexity**: how many steps does the algorithm take?\n- **Space complexity**: how much additional memory does the algorithm use?\nBoth are expressed with **asymptotic notation**, which ignores constant coefficients and focuses only on growth trends. The calculation rules are:\n- Keep only the highest-order term: $3n^2 + 2n + 1 \\Rightarrow O(n^2)$\n- Ignore constant coefficients: $5n \\Rightarrow O(n)$\n- Nested loops multiply: two levels each running $n$ times $\\Rightarrow O(n^2)$\n- Sequential structure takes the maximum: $O(n) + O(n^2) \\Rightarrow O(n^2)$\nThe Three Asymptotic Bounds\nThe same algorithm may behave very differently on different inputs. The three asymptotic symbols describe the algorithm's behavior from three angles: **upper bound, lower bound, and tight bound**.\nBig O notation (upper bound, worst case)\n**Mathematical definition**: there exist constants $c \u0026gt; 0$ and $n_0$ such that for all $n \\geq n_0$: $$f(n) \\leq c \\cdot g(n)$$\nThe running time of the algorithm is **at most** a constant multiple of $g(n)$; this is an **upper-bound promise** on the growth rate: guaranteed not to be slower than this.
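As a concrete illustration of the worst-case reading (a minimal sketch of my own, not the article's elided example): linear search may have to scan all n elements, so its worst case is $O(n)$.

```c
/* Linear search: worst case (target absent or in the last slot) scans all n elements -> O(n).
   Best case: a hit at index 0 takes a single comparison -> Omega(1). */
int linear_search(const int *a, int n, int target) {
    for (int i = 0; i < n; i++)
        if (a[i] == target) return i;
    return -1;  /* not found after n comparisons */
}
```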
Big O is the most widely used in everyday development; saying \u0026ldquo;this algorithm is $O(n^2)$\u0026rdquo; usually refers to the worst case.\n__ code_block_0 __\nBig Ω notation (lower bound, best case)\n**Mathematical definition**: there exist constants $c \u0026gt; 0$ and $n_0$ such that for all $n \\geq n_0$: $$f(n) \\geq c \\cdot g(n)$$\nThe running time of the algorithm is **at least** a constant multiple of $g(n)$; this is a **lower-bound promise** on the growth rate: guaranteed not to be faster than this.\n__ code_block_1 __\nClassic result: any **comparison-based sorting algorithm** has a lower bound of $\\Omega(n\\log n)$; this is a mathematically provable limit that cannot be broken.\nBig Θ notation (tight bound, exact description)\n**Mathematical definition**: there exist constants $c_1, c_2 \u0026gt; 0$ and $n_0$ such that for all $n \\geq n_0$: $$c_1 \\cdot g(n) \\leq f(n) \\leq c_2 \\cdot g(n)$$\nThe algorithm is clamped by $g(n)$ **from above and below at the same time**; this is the most precise description.
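A case where the tight bound exists (a small sketch of my own, standing in for the elided example): summing an array performs exactly n additions on every input, so it is $\Theta(n)$.

```c
/* Array sum: always exactly n iterations, independent of the values -> Theta(n).
   Best case and worst case coincide, so the tight bound exists. */
long long array_sum(const int *a, int n) {
    long long s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];  /* one addition per element, no early exit */
    return s;
}
```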
$\\Theta$ holds if and only if $O$ and $\\Omega$ both hold with the same order.\n__ code_block_2 __\nComparison of the three symbols\n| Symbol | Meaning | Intuition | Linear-search example |\n|---|---|---|---|\n| $O(g)$ | Upper bound | At slowest, no worse than this | $O(n)$: worst case traverses everything |\n| $\\Omega(g)$ | Lower bound | At fastest, no better than this | $\\Omega(1)$: best case finds it immediately |\n| $\\Theta(g)$ | Tight bound | Exactly this rate | Does not exist (upper and lower orders differ) |\nLinear search has no $\\Theta$ because its best case and worst case have different orders, so the upper and lower bounds cannot be closed.\nTime Complexity\nTime complexity describes the growth trend of the algorithm's **number of executed steps** with the input size $n$.\nComparison of common orders\n| Complexity | Name | Typical scenario | Magnitude at $n = 10^6$ |\n|---|---|---|---|\n| $O(1)$ | Constant | Array access by index, hash-table lookup | 1 step |\n| $O(\\log n)$ | Logarithmic | Binary search, balanced-BST operations | ~20 steps |\n| $O(n)$ | Linear | Traversing an array, linear search | $10^6$ steps |\n| $O(n\\log n)$ | Linearithmic | Merge sort, heap sort | ~$2\\times 10^7$ steps |\n| $O(n^2)$ | Quadratic | Bubble sort, selection sort | $10^{12}$ steps ⚠️ |\n| $O(2^n)$ | Exponential | Brute-force recursive subset enumeration | Unacceptable 🚫 |\nGrowth rates: $O(1) \u0026lt; O(\\log n) \u0026lt; O(n) \u0026lt; O(n\\log n) \u0026lt; O(n^2) \u0026lt; O(2^n)$\nCode Examples __ code_block_3 __\nSpace Complexity\nSpace complexity describes the growth trend of the **additional memory** the algorithm uses while running (excluding the input data itself) with the input size.\nComparison of common orders\n| Complexity | Meaning | Typical scenario |\n|---|---|---|\n| $O(1)$ | Fixed space | In-place sorting with a few temporary variables |\n| $O(\\log n)$ | Logarithmic space | Recursive call stack (binary search, average-case quicksort) |\n| $O(n)$ | Linear space | Copying an array, hash table, BFS queue |\n| $O(n^2)$ | Quadratic space | Building an $n\\times n$ matrix, adjacency matrix |\nCode
Examples __ code_block_4 __\nEach recursive call allocates a stack frame on the call stack, so **the recursion depth is the space complexity**. Deep recursion can cause a stack overflow in extreme cases.\nWorked Example: Two Sum\nLet us complete an analysis using all three asymptotic symbols plus time and space complexity on a classic problem.\nProblem description\nGiven an integer array arr and a target value target, find the indices of **two numbers in the array that sum to target**. Each input has exactly one answer, and the same element cannot be used twice.\nInput/Output\nInput: arr = [2, 7, 11, 15], target = 9\nOutput: [0, 1]\nConstraints\n- $2 \\leq n \\leq 10^4$\n- $-10^9 \\leq arr[i] \\leq 10^9$\n- Exactly one answer is guaranteed\nIdea Analysis\nSolution 1: Brute-force enumeration\nEnumerate all pairs $(i, j)$ and check whether arr[i] + arr[j] == target. The most direct idea: no extra space needed, but inefficient in time.\nSolution 2: Hash-table optimization\nWhile traversing the array, store every value already seen in a hash table. For each element, check whether its **complement** (target - arr[i]) is already in the table.
If it is, return immediately; otherwise insert the current element into the table.\nThis is the classic **space-for-time trade**: $O(n)$ extra space reduces the time from $O(n^2)$ to $O(n)$.\nCode Implementation\nSolution 1: Brute-force enumeration __ code_block_5 __\nSolution 2: Hash table __ code_block_6 __\nComplexity, Pros and Cons\nSolution 1: Brute-force enumeration\n- Time: $O(n^2)$ (worst), $\\Omega(1)$ (best: the first pair hits); no $\\Theta$\n- Space: $\\Theta(1)$\n- No extra space, memory-friendly\n- Simple to implement, no hash function required\n- Poor time efficiency: $n = 10^4$ means about $10^8$ operations\n- Not suitable for large-scale data\nSolution 2: Hash table\n- Time: $\\Theta(n)$ (one full traversal is required; each hash lookup is $O(1)$)\n- Space: $\\Theta(n)$ (at most $n$ elements in the hash table)\n- Time-efficient: a single linear scan\n- Suitable for large-scale data\n- Requires $O(n)$ extra memory\n- Hash collisions can degrade performance in extreme cases\nComparative Summary\n| | Time (worst) | Time (best) | Time ($\\Theta$) | Space | Suggested scenario |\n|---|---|---|---|---|---|\n| Brute force | $O(n^2)$ | $\\Omega(1)$ | — | $O(1)$ | Extremely memory-constrained |\n| Hash table | $O(n)$ | $\\Omega(n)$ | $\\Theta(n)$ | $O(n)$ | General business systems |\nTime-space trade-offs\nIn real engineering, time and space often **cannot both be optimal**; trade-offs depend on the scenario.\n- **Space for time** (most common): hash tables, caches, memoization arrays in dynamic programming.\n- **Time for space**: stream a large file line by line instead of loading it all into memory at once.
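The hash-table approach described above can be sketched in C, matching the language of the article's other demos. Since C has no built-in hash map, this sketch assumes a simple open-addressing table with linear probing; the table size, the hash function, and the name two_sum are choices of this sketch, not the article's elided code.

```c
#include <stdlib.h>

#define TAB_SIZE 32768  /* assumed: a power of two comfortably above 2n for n <= 10^4 */

typedef struct { int key, idx, used; } Slot;

static unsigned hash_key(int key) {
    return ((unsigned)key * 2654435761u) & (TAB_SIZE - 1);  /* multiplicative hash */
}

/* Find i < j with arr[i] + arr[j] == target; returns 1 and fills out[2] on success. */
int two_sum(const int *arr, int n, int target, int out[2]) {
    Slot *tab = calloc(TAB_SIZE, sizeof(Slot));  /* all slots start unused */
    for (int i = 0; i < n; i++) {
        int need = target - arr[i];              /* the complement we hope to have seen */
        unsigned h = hash_key(need);
        while (tab[h].used) {                    /* linear probing until an empty slot */
            if (tab[h].key == need) {
                out[0] = tab[h].idx; out[1] = i;
                free(tab);
                return 1;
            }
            h = (h + 1) & (TAB_SIZE - 1);
        }
        unsigned g = hash_key(arr[i]);           /* store the current element for later lookups */
        while (tab[g].used) g = (g + 1) & (TAB_SIZE - 1);
        tab[g].key = arr[i]; tab[g].idx = i; tab[g].used = 1;
    }
    free(tab);
    return 0;  /* no pair found */
}
```

For arr = [2, 7, 11, 15] and target = 9 this returns the indices 0 and 1, matching the example above; each element is probed and inserted in expected $O(1)$, giving the $\Theta(n)$ total.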
| Scenario | Recommended strategy |\n|---|---|\n| Real-time response, high-concurrency systems | Sacrifice space, optimize time |\n| Embedded devices, memory-constrained environments | Sacrifice time, save space |\n| General business systems | Optimize time first; space is usually sufficient |\nQuick Reference: Common Algorithm Complexities\n| Algorithm | Time ($O$) | Time ($\\Omega$) | Time ($\\Theta$) | Space |\n|---|---|---|---|---|\n| Array access | $O(1)$ | $\\Omega(1)$ | $\\Theta(1)$ | $O(1)$ |\n| Linear search | $O(n)$ | $\\Omega(1)$ | — | $O(1)$ |\n| Binary search | $O(\\log n)$ | $\\Omega(1)$ | — | $O(1)$ |\n| Bubble sort | $O(n^2)$ | $\\Omega(n)$ | $\\Theta(n^2)$ | $O(1)$ |\n| Merge sort | $O(n \\log n)$ | $\\Omega(n \\log n)$ | $\\Theta(n \\log n)$ | $O(n)$ |\n| Quick sort | $O(n^2)$ | $\\Omega(n \\log n)$ | — | $O(\\log n)$ |\n| Hash table lookup | $O(n)$ | $\\Omega(1)$ | — | $O(n)$ |\nQuick sort and linear search have no $\\Theta$ because their best and worst cases have different orders, so the upper and lower bounds cannot be closed.\n","date":"2026-03-09T10:32:00+08:00","permalink":"https://w2343419-del.github.io/WangScape/en/p/time-and-space-complexity/","title":"Time and Space Complexity"},{"content":"Dynamic programming shows up constantly in algorithm problems, so here is a summary of dynamic programming (DP) and one of its most important parts: the state transition equation.\nI. What is Dynamic Programming\nDynamic programming (DP) is an algorithmic idea that solves the original problem by decomposing it into subproblems.\nDynamic programming is not a specific data structure but a way of thinking.\nDP requires the following two properties:\n1. Optimal substructure\nThe optimal solution of the original problem contains the optimal solutions of its subproblems.\n2. Overlapping subproblems\nSubproblems recur during the computation, so their results can be cached to avoid recomputation.\nII. What is the state transition equation?
To understand the state transition equation, first understand what a \u0026ldquo;state\u0026rdquo; is.\n1. State\nA state is a description of the problem at a certain stage, usually denoted dp[i] or dp[i][j].\nFor example:\n- dp[i] = optimal solution for the first i elements\n- dp[i][j] = optimal solution for positions i through j\n- dp[i][w] = optimal solution considering the first i items with remaining capacity w\n2. State transition equation\nA state transition equation can be written roughly as:\n__ code_block_0 __\nThe state transition equation pins down which choices are available at each step, and the subproblems behind each choice (which can be understood recursively).\nIII. Typical Examples\nExample 1: Linear DP - Climbing Stairs\nProblem\nClimbing 1 or 2 steps at a time, how many distinct ways are there to reach step n?\nIdea Analysis\n- Define the state: dp[i] = number of ways to reach step i\n- Analyze the last step: step i can only be reached from step i-1 (one step) or step i-2 (two steps)\nFrom this analysis we obtain the state transition equation: $$dp[i] = dp[i-1] + dp[i-2]$$\nNote the two boundary states in this equation: dp[1] = 1, dp[2] = 2\nIllustration: __ code_block_1 __\nCode Implementation __ code_block_2 __\nComplexity, Pros and Cons\n- **Time complexity**: $O(n)$\n- **Space complexity**: $O(n)$\n- Pros: the problem is simple and easy to understand; the state definition is intuitive\n- Cons: a naive recursive version recomputes the same values many times\nExample 2: Linear DP - House Robber\nProblem\nA row of houses; adjacent houses cannot both be robbed. Maximize the total amount.
The array nums gives the amount in each house.\nIdea Analysis\n- Define the state: dp[i] = the maximum amount obtainable from the first i houses\n- Analyze the last step: house i is either robbed or not\n  - Rob it: gain nums[i] + dp[i-2] (plus the best over houses up to i-2)\n  - Skip it: gain dp[i-1] (the best over houses up to i-1)\nState transition equation: $$dp[i] = \\max(dp[i-1],\\ dp[i-2] + nums[i])$$\nIllustration: __ code_block_3 __\nCode Implementation __ code_block_4 __\nComplexity, Pros and Cons\n- **Time complexity**: $O(n)$\n- **Space complexity**: $O(n)$, optimizable to $O(1)$ (keep only the previous two values)\n- Pros: ✅ similar to climbing stairs, with a clear line of thought ✅ space optimizes down to $O(1)$\n- Cons: ❌ does not directly recover which houses were robbed\nExample 3: Knapsack DP - 0/1 Knapsack\nProblem\nGiven n items with weights w[] and values v[] and a knapsack of capacity W, maximize the total value.\nIdea Analysis\n- Define the state: dp[i][j] = maximum value considering the first i items with capacity j\n- Analyze the last step: item i is either taken or not\n  - Not taken: dp[i][j] = dp[i-1][j]\n  - Taken: dp[i][j] = dp[i-1][j-w[i]] + v[i], valid when $j \\geq w[i]$\nState transition equation: $$dp[i][j] = \\max(dp[i-1][j],\\ dp[i-1][j-w[i]] + v[i])$$\nIllustration: items (w=2, v=3), (w=3, v=4), (w=4, v=5), knapsack capacity W = 5\n__ code_block_5 __\nCode Implementation __ code_block_6 __\n**Space optimization**: the two-dimensional dp can be compressed to one dimension; the inner loop **must run in reverse** to prevent the same item from being taken twice: __ code_block_7 __\nComplexity, Pros and Cons\n- **Time complexity**: $O(nW)$\n- **Space complexity**: $O(nW)$, optimizable to $O(W)$\n- Pros: a classic DP framework that is easy to extend; solves the optimum for every capacity at once\n- Cons: when the number of items or the capacity is large, the time and space cost is
high\nExample 4: Sequence DP - Longest Common Subsequence (LCS)\nProblem\nGiven two strings, find the length of their longest common subsequence. Example: \u0026quot;abcde\u0026quot; and \u0026quot;ace\u0026quot; → length 3 (ace)\nIdea Analysis\nDefine the state: dp[i][j] = LCS length of the first i characters of s1 and the first j characters of s2\nAnalyze the last step, depending on whether s1[i-1] and s2[j-1] are equal:\n- Equal: dp[i][j] = dp[i-1][j-1] + 1\n- Not equal: dp[i][j] = max(dp[i-1][j], dp[i][j-1]) (drop the last character of s1 or of s2 and keep whichever is better)\nState transition equation, as a piecewise function:\n$$dp[i][j] = \\begin{cases} dp[i-1][j-1] + 1 \u0026amp; \\text{when equal} \\\\ \\max(dp[i-1][j],\\ dp[i][j-1]) \u0026amp; \\text{when not equal} \\end{cases}$$\nIllustration: s1 = \u0026ldquo;abcde\u0026rdquo;, s2 = \u0026ldquo;ace\u0026rdquo;\n__ code_block_8 __\nCode Implementation __ code_block_9 __\nComplexity, Pros and Cons\n- **Time complexity**: $O(mn)$\n- **Space complexity**: $O(mn)$, optimizable to $O(\\min(m, n))$ (rolling array)\n- Pros: the framework for sequence-alignment problems; extensible to recovering the LCS itself (by backtracking)\n- Cons: when m and n are large, the space cost is high\nExample 5: Interval DP - Burst Balloons\nProblem\nBursting balloon i scores nums[i-1] * nums[i] * nums[i+1]; maximize the total score.\nIdea Analysis\n- **Key idea**: do not ask \u0026ldquo;which balloon to burst first\u0026rdquo; but \u0026ldquo;which balloon to burst **last** in the interval (i, j)\u0026rdquo;, so that both boundaries are known and nothing shifts underneath us\n- Define the state: dp[i][j] = maximum score for bursting all balloons strictly inside the open interval (i, j)\nFrom this analysis we obtain the state transition equation:\n$$dp[i][j] = \\max_{i \u0026lt; k \u0026lt; j} \\left( dp[i][k] + dp[k][j] + nums[i] \\times nums[k] \\times nums[j] \\right)$$\nwhere k is the last balloon burst in the interval (i, j).\nCode Implementation __ code_block_10 __\nComplexity, Pros and Cons\n- **Time complexity**: $O(n^3)$\n- **Space complexity**: $O(n^2)$\n- Pros: a classic interval-DP example; the \u0026ldquo;last one\u0026rdquo; idea is very instructive; the actual bursting order can be recovered by backtracking\n- Cons: the idea is relatively subtle and confusing on first contact; the time complexity is cubic\nIV. DP Model Summary\n| Type | State transition pattern | Time | Space | Representative problems |\n|---|---|---|---|---|\n| **Linear DP** | Recurrence on a prefix | $O(n)$ | $O(n)$ | Climbing stairs, house robber |\n| **Knapsack DP** | Take-or-skip max | $O(nW)$ | $O(nW)$ | 0/1 knapsack, unbounded knapsack |\n| **Sequence DP** | Two-index recurrence | $O(mn)$ | $O(mn)$ | LCS, edit distance |\n| **Interval DP** | Split the interval | $O(n^3)$ | $O(n^2)$ | Burst balloons, matrix-chain multiplication |\nSummary \u0026amp; Suggestions\n1. **Start from the problem**: determine whether DP applies (optimal substructure and overlapping subproblems)\n2. **Define the state**: make the meaning of dp[...] explicit\n3. **Write the transition equation**: determine the relationship between states\n4. **Set the initial conditions**: handle the boundary cases\n5. **Implement and optimize**: write the code, then consider space and time optimizations\n","date":"2026-03-02T13:57:00+08:00","permalink":"https://w2343419-del.github.io/WangScape/en/p/state-transition-equations-and-dynamic-programming/","title":"State Transition Equations and Dynamic Programming"},{"content":"This is a classic but hard chessboard problem. Although it was a NOIP problem back in 2000, as the final problem of its set it is still quite challenging for first-time solvers.
This article summarizes three different solutions: dynamic programming, DFS with memoization, and minimum-cost maximum flow, unfolding gradually from easy to hard.\nProblem\nSource: NOIP 2000 Senior Group, Problem 4\nProblem description\nThere is an N × N grid (N ≤ 9); some squares contain positive integers and the rest contain 0. Someone starts at point A (0, 0) in the top-left corner and may walk only down or right until reaching point B (N, N) in the bottom-right corner. Along the way, he collects the number in each square he passes (the square then becomes 0). This person walks from A to B twice; find two such paths so that the sum of the collected numbers is maximized.\nInput/Output\n- **Input format**: the first line is an integer N (the grid is N × N); each subsequent line has three integers, the first two giving a position and the third the number placed there. A line containing a single 0 ends the input.\n- **Output format**: output a single integer, the maximum sum collected along the two paths.\nExample\nInput: __ code_block_0 __\nOutput: __ code_block_1 __\nConstraints\nData range: 1 ≤ N ≤ 9\nProblem analysis\nWhy can we not simply search twice (find one optimal path, then find a second in the remaining cells)? Because the first path changes the map (its numbers are taken) and affects the second path's result; the two paths must be considered jointly and cannot be optimized independently.\nIdea Analysis\nThe three solutions work as follows.\nSolution 1: Dynamic programming\n- **Core idea**: advance both paths simultaneously, simulating two people walking at the same time within one DP.\n- **State design**: let dp[k][x1][x2], where: k is the number of steps taken so far (i.e.
the value of x + y, ranging from 2 to 2N); x1 and x2 are the row numbers where the two walkers currently are; y = k - x can be derived from k and x (the key dimension-reduction step), so the column numbers need not be stored separately.\n- **Deduplication**: when the two walkers stand on the same cell (x1 == x2, hence y1 == y2), the cell is counted only once.\n- **State transition**: each walker may move right or down, giving 4 combinations. Each state dp[k][x1][x2] holds the maximum sum when, after k steps, walker 1 is on row x1 and walker 2 is on row x2.\nSolution 2: DFS + memoized search\n- **Core idea**: essentially the same as the DP (deep search and DP are two faces of the same thing), but with memoization added to avoid recomputation. Without memoization the work explodes exponentially, roughly $4^{16}$ calls at N = 9.\n- **Implementation**: starting from the initial state, recursively try all possible transitions while caching computed states in a memo array to avoid duplication. Return 0 on reaching the end point and propagate the maximum value back up layer by layer.\nSolution 3: Minimum-cost maximum flow\n- **Core idea**: convert \u0026ldquo;maximize two paths\u0026rdquo; into a network-flow problem.\n- **Modeling**:\n  - Two paths from A to B = 2 units of flow from source to sink\n  - Each cell collected at most once = a capacity limit per node\n  - Maximize the collected numbers = maximize profit (minimize the negated cost)\n- **Node splitting**: each cell (i, j) is split into two nodes, in and out:\n  - in → out with capacity 1, cost -map[i][j] (the first path takes the number)\n  - in → out with an extra edge of capacity 1, cost 0 (a second path may pass through without scoring)\n- **Edges**: (i, j) out connects to (i+1, j) in and (i, j+1) in, capacity 2, cost 0.
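The Solution 1 transition described above can be sketched as follows. The function name, the array sizing, and the NEG_INF sentinel are assumptions of this sketch rather than the article's elided implementation; the grid is 1-indexed, and the shared starting cell (1, 1) is counted once.

```c
#define N_MAX 10           /* N <= 9, 1-indexed */
#define NEG_INF (-1000000) /* sentinel for unreachable states */

/* dp over "steps taken" k = x + y; y is derived as k - x, so only rows are stored. */
int grid_two_paths(int n, int grid[N_MAX][N_MAX]) {
    static int dp[2 * N_MAX][N_MAX][N_MAX];
    for (int k = 0; k < 2 * N_MAX; k++)
        for (int i = 0; i < N_MAX; i++)
            for (int j = 0; j < N_MAX; j++)
                dp[k][i][j] = NEG_INF;
    dp[2][1][1] = grid[1][1];  /* both walkers start at (1,1); the cell counts once */
    for (int k = 3; k <= 2 * n; k++)
        for (int x1 = 1; x1 <= n && x1 < k; x1++)
            for (int x2 = 1; x2 <= n && x2 < k; x2++) {
                int y1 = k - x1, y2 = k - x2;
                if (y1 < 1 || y1 > n || y2 < 1 || y2 > n) continue;
                /* 4 combinations: each walker arrives from above (x-1) or the left (same x) */
                int px1[2] = {x1 - 1, x1}, px2[2] = {x2 - 1, x2};
                int best = NEG_INF;
                for (int a = 0; a < 2; a++)
                    for (int b = 0; b < 2; b++)
                        if (dp[k - 1][px1[a]][px2[b]] > best)
                            best = dp[k - 1][px1[a]][px2[b]];
                if (best == NEG_INF) continue;  /* no reachable predecessor */
                int gain = grid[x1][y1] + grid[x2][y2];
                if (x1 == x2) gain -= grid[x1][y1];  /* same cell: take the number once */
                dp[k][x1][x2] = best + gain;
            }
    return dp[2 * n][n][n];  /* both walkers at (n, n) */
}
```

On a 2 × 2 grid every cell lies on one of the two paths, so the answer is simply the sum of all four cells, which makes a handy sanity check.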
Code Implementation\nSolution 1: Dynamic programming __ code_block_2 __\nSolution 2: DFS + memoized search __ code_block_3 __\nSolution 3: Minimum-cost maximum flow __ code_block_4 __\nComplexity, Pros and Cons\nSolution 1: Dynamic programming\n- **Time complexity**: $O(N^3)$; naively the state count is $O(N^4)$, but the dimension reduction and the constraints linking x1 and x2 bring it down to about $O(N^3)$ states, with 4 transitions per state\n- **Space complexity**: $O(N^3)$; the dp array has size $2N \\times N \\times N$\n- Pros: clear and easy to understand; solves the problem in a single sweep; relatively simple code\n- Cons: sizable memory footprint (about 1.3 MB at N = 9)\nSolution 2: DFS + memoized search\n- **Time complexity**: $O(N^3)$; same state count as the DP, each state computed at most once thanks to memoization\n- **Space complexity**: $O(N^3)$; the memo and visited arrays each take $O(N^3)$\n- Pros: logical and natural top-down thinking; easy to add pruning (though this problem needs little); flexible state definition\n- Cons: recursion stack depth of $O(N)$; same space complexity as the DP\nSolution 3: Cost flow\n- **Time complexity**: roughly $O(\\text{Flow} \\times \\text{SPFA}) = O(N^4)$; the flow is 2, and each augmentation runs a shortest-path search over $V = O(N^2)$ nodes and $E = O(N^2)$ edges\n- **Space complexity**: $O(V + E) = O(N^2)$ for storing the graph\n- Pros: generalizes to broader scenarios (more paths, restricted grids, etc.)
- The code framework is reusable for other cost-flow problems\n- Relatively small memory footprint\n- Cons: highest time complexity (about $O(N^4)$ vs $O(N^3)$); the code is long, complex, and error-prone; the difficulty far exceeds what the contest requires\nComparison and Summary\n| Feature | Solution 1: DP | Solution 2: DFS | Solution 3: Cost flow |\n|---|---|---|---|\n| Ease of understanding | ★★★★☆ | ★★★★☆ | ★☆☆☆☆ |\n| Implementation difficulty | ★★☆☆☆ | ★★☆☆☆ | ★★★★★ |\n| Time complexity | $O(N^3)$ | $O(N^3)$ | $O(N^4)$ |\n| Space complexity | $O(N^3)$ | $O(N^3)$ | $O(N^2)$ |\n| Recommendation | ★★★★★ | ★★★★☆ | ★★☆☆☆ |\n**Conclusion**: for this problem, **Solution 1 (DP)** is the best choice: clear, efficient, and not overly complicated. Solution 2 suits those who want to practice DFS. Solution 3, though elegant, is less efficient than DP at the scale of N ≤ 9 and mainly serves to broaden one's knowledge.\n","date":"2026-02-28T11:31:00+08:00","permalink":"https://w2343419-del.github.io/WangScape/en/p/p1004-noip-2000-improvement-group-grid-score-analysis-and-summary/","title":"P1004 [NOIP 2000 improvement group] grid score analysis and summary"},{"content":"After nearly a day of self-doubt, wrestling with AI (doge), and head-on-the-table frustration\u0026hellip;\nJanuary 21, 2026 finally became an extraordinary day for me.\nMy personal blog is finally live!!!\nBlog Goals\nI will record here:\n- Summaries of difficulties, gains, and knowledge points encountered in programming\n- Learning experiences and algorithm analysis\n- Occasional complaints and book reviews\n- Favorite poems and other cultural content\nMaybe this blog will become my all-subjects notebook? (at least during university)\nBesides code, it may also touch on artificial intelligence, games, music, movies, and other fields.
I will do my best to keep this blog focused on learning.\nFinal Words\nAlthough I do not know whether I can keep this blog going forever (I am a bit lazy, after all), I will do my best.\nI hope my update frequency will not be too low\u0026hellip;\n(P.S. The blog is rough now, but it will get better!)\nEdited February 3, 2026\n","date":"2026-02-03T13:28:00+08:00","permalink":"https://w2343419-del.github.io/WangScape/en/p/alpine-flowing-water-searching-for-sound/","title":"Alpine flowing water, searching for sound"}]