From 9f9bfa3646e2132c7b5309566fbe525c8bae4d60 Mon Sep 17 00:00:00 2001
From: Kirk Scheibelhut
- The number of unique error values across the entire compilation should determine the size of the error set type.
- However right now it is hard coded to be a {#syntax#}u16{#endsyntax#}. See #786.
+ The error set type defaults to a {#syntax#}u16{#endsyntax#}, though if the maximum number of distinct
+ error values is provided via the --error-limit [num] command line parameter, an integer type
+ with the minimum number of bits required to represent all of the error values will be used.
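As an aside on the new wording: the backing integer of the global error set can be observed from user code with `@sizeOf(anyerror)`. The sketch below is illustrative only and is not part of this patch; the size it prints depends on the `--error-limit` value the compiler was invoked with.

```zig
const std = @import("std");

// Illustrative only (not part of the patch): print how many bytes the
// global error set occupies. With the default limit the backing integer
// is a u16, so this prints 2; a smaller --error-limit may shrink it.
pub fn main() void {
    std.debug.print("anyerror occupies {d} bytes\n", .{@sizeOf(anyerror)});
}
```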
You can {#link|coerce|Type Coercion#} an error from a subset to a superset:
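A minimal sketch of the coercion this sentence introduces (the error set and function names here are illustrative; the example that actually follows in the langref may differ):

```zig
const std = @import("std");

// A small error set and a superset of it (illustrative names).
const AllocationError = error{
    OutOfMemory,
};

const FileOpenError = error{
    AccessDenied,
    OutOfMemory,
    FileNotFound,
};

// Returning an AllocationError where a FileOpenError is expected coerces
// implicitly, because AllocationError is a subset of FileOpenError.
fn toSuperset(err: AllocationError) FileOpenError {
    return err;
}

test "coerce error from subset to superset" {
    const err = toSuperset(AllocationError.OutOfMemory);
    try std.testing.expect(err == FileOpenError.OutOfMemory);
}
```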