author    Chris Ward <chriswhward@gmail.com>    2018-08-24 22:31:42 +0800
committer Chris Ward <chriswhward@gmail.com>    2018-11-12 21:17:09 +0800
commit    4370425823617e33fa95fa47c1b6314fd64ea8b2 (patch)
tree      a245bc475e26cab3e7e330535935d555d4d94a61 /docs
parent    09f8ff27fc576dbbd05e31471bb39c00abe90563 (diff)
Clarify term and tidy text
Use UK spelling in heading
Remove colon
Diffstat (limited to 'docs')
-rw-r--r--  docs/miscellaneous.rst  15
1 file changed, 7 insertions(+), 8 deletions(-)
diff --git a/docs/miscellaneous.rst b/docs/miscellaneous.rst
index 8cc52c8f..165ed0fc 100644
--- a/docs/miscellaneous.rst
+++ b/docs/miscellaneous.rst
@@ -165,16 +165,16 @@ Different types have different rules for cleaning up invalid values:
.. index:: optimizer, common subexpression elimination, constant propagation
*************************
-Internals - The Optimizer
+Internals - The Optimiser
*************************
-The Solidity optimizer operates on assembly, so it can be and also is used by other languages. It splits the sequence of instructions into basic blocks at ``JUMPs`` and ``JUMPDESTs``. Inside these blocks, the instructions are analysed and every modification to the stack, to memory or storage is recorded as an expression which consists of an instruction and a list of arguments which are essentially pointers to other expressions. The main idea is now to find expressions that are always equal (on every input) and combine them into an expression class. The optimizer first tries to find each new expression in a list of already known expressions. If this does not work, the expression is simplified according to rules like ``constant + constant = sum_of_constants`` or ``X * 1 = X``. Since this is done recursively, we can also apply the latter rule if the second factor is a more complex expression where we know that it will always evaluate to one. Modifications to storage and memory locations have to erase knowledge about storage and memory locations which are not known to be different: If we first write to location x and then to location y and both are input variables, the second could overwrite the first, so we actually do not know what is stored at x after we wrote to y. On the other hand, if a simplification of the expression x - y evaluates to a non-zero constant, we know that we can keep our knowledge about what is stored at x.
+The Solidity optimiser operates on assembly so that other languages can use it. It splits the sequence of instructions into basic blocks at ``JUMPs`` and ``JUMPDESTs``. Inside these blocks, the optimiser analyses the instructions and records every modification to the stack, memory, or storage as an expression consisting of an instruction and a list of arguments, which are pointers to other expressions. The optimiser uses a component called "CommonSubexpressionEliminator" that, amongst other tasks, finds expressions that are always equal (on every input) and combines them into an expression class. The optimiser first tries to find each new expression in a list of already known expressions. If this does not work, it simplifies the expression according to rules like ``constant + constant = sum_of_constants`` or ``X * 1 = X``. Since this is a recursive process, we can also apply the latter rule if the second factor is a more complex expression which we know always evaluates to one. Modifications to storage and memory locations have to erase knowledge about storage and memory locations which are not known to be different. If we first write to location x and then to location y and both are input variables, the second could overwrite the first, so we do not know what is stored at x after we wrote to y. If simplification of the expression x - y evaluates to a non-zero constant, we know that we can keep our knowledge about what is stored at x.
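A rough sketch of the rule-driven simplification described above, written as hypothetical Python rather than the compiler's actual C++ (the names ``Expr`` and ``simplify`` are invented for illustration)::

    # Hypothetical sketch only: the real optimiser works on EVM assembly
    # expressions inside the compiler; this just mirrors the two rules above.
    from dataclasses import dataclass
    from typing import Tuple

    @dataclass(frozen=True)
    class Expr:
        op: str                        # e.g. "CONST", "ADD", "MUL"
        args: Tuple["Expr", ...] = ()
        value: int = 0                 # only meaningful when op == "CONST"

    def simplify(e: Expr) -> Expr:
        args = tuple(simplify(a) for a in e.args)   # simplify arguments first
        # constant + constant = sum_of_constants
        if e.op == "ADD" and args and all(a.op == "CONST" for a in args):
            return Expr("CONST", value=sum(a.value for a in args))
        # X * 1 = X; thanks to the recursion above, this also fires when the
        # second factor is a complex expression that merely simplifies to one
        if e.op == "MUL" and len(args) == 2:
            x, y = args
            if y.op == "CONST" and y.value == 1:
                return x
            if x.op == "CONST" and x.value == 1:
                return y
        return Expr(e.op, args, e.value)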
-At the end of this process, we know which expressions have to be on the stack in the end and have a list of modifications to memory and storage. This information is stored together with the basic blocks and is used to link them. Furthermore, knowledge about the stack, storage and memory configuration is forwarded to the next block(s). If we know the targets of all ``JUMP`` and ``JUMPI`` instructions, we can build a complete control flow graph of the program. If there is only one target we do not know (this can happen as in principle, jump targets can be computed from inputs), we have to erase all knowledge about the input state of a block as it can be the target of the unknown ``JUMP``. If a ``JUMPI`` is found whose condition evaluates to a constant, it is transformed to an unconditional jump.
+After this process, we know which expressions have to be on the stack at the end, and have a list of modifications to memory and storage. This information is stored together with the basic blocks and is used to link them. Furthermore, knowledge about the stack, storage and memory configuration is forwarded to the next block(s). If we know the targets of all ``JUMP`` and ``JUMPI`` instructions, we can build a complete control flow graph of the program. If there is even a single target we do not know (this can happen since, in principle, jump targets can be computed from inputs), we have to erase all knowledge about the input state of a block, as it can be the target of the unknown ``JUMP``. If the optimiser finds a ``JUMPI`` whose condition evaluates to a constant, it transforms it to an unconditional jump.
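A hypothetical sketch of the ``JUMPI`` rewrite mentioned above (the instruction tuples and the helper name are made up for illustration, not taken from the compiler)::

    # If a JUMPI's condition folded to a constant, it either becomes an
    # unconditional JUMP or disappears entirely.
    def fold_constant_jumpi(const_condition, target):
        if const_condition != 0:
            # always taken: replace with an unconditional jump
            return [("PUSH", target), ("JUMP",)]
        # never taken: drop the jump; control simply falls through
        return []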
-As the last step, the code in each block is completely re-generated. A dependency graph is created from the expressions on the stack at the end of the block and every operation that is not part of this graph is essentially dropped. Now code is generated that applies the modifications to memory and storage in the order they were made in the original code (dropping modifications which were found not to be needed) and finally, generates all values that are required to be on the stack in the correct place.
+As the last step, the code in each block is re-generated. The optimiser creates a dependency graph from the expressions on the stack at the end of the block, and it drops every operation that is not part of this graph. It generates code that applies the modifications to memory and storage in the order they were made in the original code (dropping modifications which were found not to be needed). Finally, it generates all values that are required to be on the stack in the correct place.
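One way to picture the re-generation step, again as a hypothetical Python sketch (the node representation is invented; only what is reachable from the block's final stack and surviving stores gets emitted)::

    # An expression node is (operation, tuple of argument nodes).
    def live_nodes(final_state):
        """Collect everything the final stack/storage state depends on;
        any operation outside this set is dropped."""
        seen = []
        work = list(final_state)
        while work:
            node = work.pop()
            if node not in seen:
                seen.append(node)
                op, args = node
                work.extend(args)
        return seen

    # Example: the MUL node is live only if a final stack slot refers to it.
    mul = ("MUL", (("CONST_2", ()), ("CONST_3", ())))
    unused = ("ADD", (("CONST_1", ()), ("CONST_1", ())))
    assert unused not in live_nodes([mul])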
-These steps are applied to each basic block and the newly generated code is used as replacement if it is smaller. If a basic block is split at a ``JUMPI`` and during the analysis, the condition evaluates to a constant, the ``JUMPI`` is replaced depending on the value of the constant, and thus code like
+These steps are applied to each basic block and the newly generated code is used as a replacement if it is smaller. If a basic block is split at a ``JUMPI`` and, during the analysis, the condition evaluates to a constant, the ``JUMPI`` is replaced depending on the value of the constant. Thus code like
::
@@ -185,15 +185,14 @@ These steps are applied to each basic block and the newly generated code is used
else
return 1;
-is simplified to code which can also be compiled from
+still simplifies to code which can also be compiled from the following, even though
+the instructions contained a jump at the beginning of the process:
::
data[7] = 9;
return 1;
-even though the instructions contained a jump in the beginning.
-
.. index:: source mappings
***************