author     J08nY  2025-08-06 13:35:38 +0200
committer  J08nY  2025-08-06 13:35:38 +0200
commit8ff2846df2388984a7954584218a200904a3ed1f (patch)
treee52819e253bedaad8f26134e3e24becd5102ce54
parentbb50056bf5c2834e7814681e9812d32ffd39a030 (diff)
-rw-r--r--  analysis/scalarmults/simulate.ipynb  92
1 file changed, 77 insertions, 15 deletions
diff --git a/analysis/scalarmults/simulate.ipynb b/analysis/scalarmults/simulate.ipynb
index 9b450b1..cad0c2a 100644
--- a/analysis/scalarmults/simulate.ipynb
+++ b/analysis/scalarmults/simulate.ipynb
@@ -5,12 +5,24 @@
"id": "805d746e-610b-4d40-80d2-a8080a993f96",
"metadata": {},
"source": [
- "# Simulating EPA-RE using points of low-order"
+ "# Simulating EPA-RE using points of low-order\n",
+ "\n",
+ "As visible in the [`formulas`](formulas.ipynb) notebook, most addition formulas have exceptional cases.\n",
+    "We can trigger these exceptions by supplying points of low order to the scalar multiplier, which\n",
+ "can result in the point at infinity appearing in an unexpected place in the scalar multiplication computation.\n",
+    "The behavior of an implementation with regard to these exceptional cases depends on what formulas it implements\n",
+ "and what checks it performs on the inputs/outputs/intermediates of the formulas and the whole scalar multiplication.\n",
+ "Furthermore, what multiples appear in the scalar multiplication depends on the scalar multiplier.\n",
+ "\n",
+ "This notebook explores the statistical differences in error rates of scalar multipliers computing on low-order base points\n",
+ "to reverse-engineer the scalar multipliers and countermeasures, even in the presence of scalar randomization. This\n",
+    "is possible because we do not care about the actual multiples computed for any given scalar, but rather about their statistical\n",
+    "properties, e.g.: for random scalars, how probable is an error for a point of a given low order, say 5?"
]
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 3,
"id": "b4386513-cc14-434b-a748-2863f8657452",
"metadata": {},
"outputs": [],
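The core idea above can be sketched in plain Python. This is a deliberately simplified model (the function names and the multiplier model are illustrative, not pyecsca's API): a left-to-right double-and-add multiplier computes a predictable sequence of multiples of its input point, and if that point has low order `q`, any intermediate multiple divisible by `q` is the point at infinity and may trigger an exceptional formula case.

```python
import random

def multiples_ltr(scalar: int) -> list[int]:
    # Multiples of the input point computed by a plain
    # left-to-right double-and-add scalar multiplier.
    seen = [1]
    acc = 1
    for bit in bin(scalar)[3:]:  # skip "0b" and the leading 1-bit
        acc *= 2                 # point doubling
        seen.append(acc)
        if bit == "1":
            acc += 1             # addition of the input point
            seen.append(acc)
    return seen

def error_rate(q: int, trials: int, bits: int = 16, seed: int = 1) -> float:
    # Fraction of random scalars whose computation touches a multiple
    # divisible by q, i.e. the point at infinity for a point of order q.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        k = rng.randrange(1, 1 << bits)
        if any(m % q == 0 for m in multiples_ltr(k)):
            hits += 1
    return hits / trials
```

For a real multiplier the sequence of multiples depends on the algorithm (window size, recoding, countermeasures, ...), which is exactly what makes the per-order error rates distinguishable.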
@@ -57,12 +69,15 @@
"id": "5b156d2a-7345-47f8-a76e-71a7d2be9d22",
"metadata": {},
"source": [
- "## Initialize"
+ "## Initialize\n",
+ "Let's first initialize some sets of multipliers and countermeasures we will be reverse-engineering. They come from the\n",
+    "[common.py](common.py) file, which you can examine for more information. We need to silence [some warnings](https://github.com/newaetech/chipwhisperer/pull/549)\n",
+    "raised by ChipWhisperer, which is used by pyecsca."
]
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 4,
"id": "3463a7bd-34d8-458b-8ceb-dddf99de21dc",
"metadata": {},
"outputs": [],
@@ -79,10 +94,21 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 5,
"id": "170c11fc-86cf-4eb1-bf4e-b2e44b2d7ac5",
"metadata": {},
- "outputs": [],
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Scalar multipliers considered: 65\n",
+ "Scalar multipliers (with a single countermeasure) considered: 390\n",
+ "Error models considered: 32\n",
+ "Total configurations considered: 12480\n"
+ ]
+ }
+ ],
"source": [
"nmults = len(all_mults)\n",
"nmults_ctr = len(all_mults_with_ctr)\n",
@@ -100,7 +126,10 @@
"id": "8c5e9543-8447-4362-b9e2-c896d71f69a9",
"metadata": {},
"source": [
- "## Prepare"
+ "## Prepare\n",
+    "Next, let's set up some parameters. The curve specified below is not used for any computation, except that its order is given\n",
+    "to the multipliers. The `use_init` and `use_multiply` booleans specify whether the precomputation part and the multiplication part, respectively, are used in the simulation. The `num_workers` option specifies the level of parallelization (via processes) that is employed; make sure to set it to something reasonable. The `samples` option specifies the number of samples (i.e. scalar multiplications per scalar multiplier) that get computed in one *chunk*, which corresponds to one run\n",
+    "of the cell. We use chunks to make it easy to compute and aggregate more samples without needing too much memory."
]
},
{
@@ -112,13 +141,15 @@
"source": [
"category = \"secg\"\n",
"curve = \"secp256r1\"\n",
- "kind = \"precomp+necessary\"\n",
+ "params = get_params(category, curve, \"projective\")\n",
+ "bits = params.order.bit_length()\n",
+ "\n",
"use_init = True\n",
"use_multiply = True\n",
- "params = get_params(category, curve, \"projective\")\n",
+ "\n",
"num_workers = 30\n",
- "bits = params.order.bit_length()\n",
"samples = 1000\n",
+ "\n",
"selected_mults = all_mults"
]
},
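For reference, the only thing the simulation takes from the curve is its group order: with secp256r1, `bits` comes out as 256 and random scalars are drawn modulo the order. A minimal stdlib-only sketch (the constant is the standard secp256r1 group order; `random_scalar` is a hypothetical helper, not pyecsca code):

```python
import random

# Group order n of secp256r1 (a standard curve constant).
order = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

bits = order.bit_length()  # the scalar size the multipliers work with

def random_scalar(rng: random.Random) -> int:
    # A uniform nonzero scalar modulo the group order.
    return rng.randrange(1, order)
```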
@@ -136,6 +167,11 @@
" use_init: bool = True,\n",
" use_multiply: bool = True,\n",
" seed: bytes | None = None) -> MultResults:\n",
+ " \"\"\"\n",
+ " Takes a MultIdent, which specifies a scalar multiplier (with an optional countermeasure)\n",
+ " and simulates `samples` scalar multiplications, while tracking which multiples of the\n",
+ " symbolic input point get computed.\n",
+ " \"\"\"\n",
" results = []\n",
" if seed is not None:\n",
" random.seed(seed)\n",
@@ -167,7 +203,11 @@
" samples: int = 100,\n",
" use_init: bool = True,\n",
" use_multiply: bool = True,\n",
- " seed: bytes | None = None) -> MultResults:\n",
+ " seed: bytes | None = None) -> str:\n",
+ " \"\"\"\n",
+ " Like the `simulate_multiples` function above, but stores the pickled output directly\n",
+ " into a file named `fname`.\n",
+ " \"\"\"\n",
" results = []\n",
" if seed is not None:\n",
" random.seed(seed)\n",
@@ -196,6 +236,11 @@
"outputs": [],
"source": [
"def evaluate_multiples(mult: MultIdent, res: MultResults, divisors: set[int]):\n",
+ " \"\"\"\n",
+    "    Takes a MultIdent, a MultResults, and a set of divisors (base point orders `q`) and\n",
+    "    evaluates the results using the error model from the MultIdent. Note that the MultIdent\n",
+ " must have an error model in this case. Returns the ProbMap.\n",
+ " \"\"\"\n",
" errors = {divisor: 0 for divisor in divisors}\n",
" samples = len(res)\n",
" divisors_hash = hashlib.blake2b(str(sorted(divisors)).encode(), digest_size=8).digest()\n",
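In spirit, the evaluation boils down to the following sketch (hypothetical names, and a much simpler "error model" than the real ones: an error whenever a trace contains a multiple divisible by `q`):

```python
def error_probabilities(traces: list[list[int]], divisors: set[int]) -> dict[int, float]:
    # For each base point order q, the fraction of traces that compute
    # a multiple divisible by q (and thus hit the point at infinity).
    counts = {q: 0 for q in divisors}
    for trace in traces:
        for q in divisors:
            if any(m % q == 0 for m in trace):
                counts[q] += 1
    return {q: counts[q] / len(traces) for q in divisors}
```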
@@ -223,6 +268,11 @@
"outputs": [],
"source": [
"def evaluate_multiples_direct(mult: MultIdent, fname: str, offset: int, divisors: set[int]):\n",
+ " \"\"\"\n",
+ " Like `evaluate_multiples`, but instead reads the MultResults from a file named `fname`\n",
+ " at an `offset`. Still returns the ProbMap, which is significantly smaller and easier\n",
+ " to pickle than the MultResults.\n",
+ " \"\"\"\n",
" with open(fname, \"rb\") as f:\n",
" f.seek(offset)\n",
" _, res = pickle.load(f)\n",
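The `offset` parameter suggests a chunk file layout with several pickled records appended back to back: the writer remembers `f.tell()` for each record so a reader can `seek` straight to one record without unpickling the rest. A stdlib-only sketch of that layout (file name and record contents are made up):

```python
import os
import pickle
import tempfile

records = [("mult-a", [1, 2, 3]), ("mult-b", [4, 5])]

fd, fname = tempfile.mkstemp(suffix=".pickle")
os.close(fd)

# Write: append pickled records, remembering where each one starts.
offsets = []
with open(fname, "wb") as f:
    for record in records:
        offsets.append(f.tell())
        pickle.dump(record, f)

# Read: seek directly to the second record, skipping the first.
with open(fname, "rb") as f:
    f.seek(offsets[1])
    name, res = pickle.load(f)

os.remove(fname)
```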
@@ -251,7 +301,7 @@
"metadata": {},
"source": [
"## Run\n",
- "Run this cell as many times as you want. It will write chunks into files."
+    "Run this cell as many times as you want. It will simulate `samples` scalar multiplications for each `MultIdent` (a scalar multiplier implementation with an optional countermeasure) and store them into a new chunk file."
]
},
{
@@ -285,7 +335,14 @@
"id": "44120f28-ae4a-42e3-befb-ebc487d51f9e",
"metadata": {},
"source": [
- "## Process"
+ "## Process\n",
+    "The chunks generated in the previous cell are not fully processed yet. They only contain traces of the scalar multiplication computations, as graphs of formula applications on symbolic multiples of the input point. In the next step,\n",
+    "we iterate over all possible error models and evaluate these graphs under each of them. This takes us from, e.g., 1000\n",
+    "scalar multiplication traces, an error model, and a base point order `q` to a number from 0 to 1000: the number of errors in those multiplications given the error model and order `q`. The orders that are used are described in the\n",
+ "[common.py](common.py) file.\n",
+ "\n",
+    "> The cell below computes the error probabilities for all chunks of multiples that are missing them. Hence, you only need\n",
+ "> to run it once."
]
},
{
@@ -374,7 +431,10 @@
"id": "a1bb9459-aa2b-4a07-970e-6e981ab3f97e",
"metadata": {},
"source": [
- "## Merge"
+ "## Merge\n",
+ "\n",
+ "Finally, we have multiple chunks of error probabilities (i.e. `ProbMaps`) that we merge into a single file `merged.pickle`\n",
+ "that can be used by the [visualize](visualize.ipynb) and [distinguish](distinguish.ipynb) notebooks."
]
},
{
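A merge of per-chunk error probabilities has to weight each chunk by its sample count. A minimal sketch of that step (this stands in for `ProbMap` merging; tuples `(probabilities, samples)` are a hypothetical representation of a chunk):

```python
def merge_chunks(chunks: list[tuple[dict[int, float], int]]) -> tuple[dict[int, float], int]:
    # Sample-weighted average of per-divisor error probabilities.
    # All chunks are assumed to cover the same set of divisors.
    total = sum(samples for _, samples in chunks)
    merged = {
        q: sum(probs[q] * samples for probs, samples in chunks) / total
        for q in chunks[0][0]
    }
    return merged, total
```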
@@ -427,7 +487,9 @@
"id": "228922dc-67bf-481a-9f08-4084695e2059",
"metadata": {},
"source": [
- "## Misc"
+ "## Misc\n",
+ "\n",
+    "The cell below performs some profiling of the multiples evaluation."
]
},
{