sent to openpower-hdl-cores:

continuing with the implementation of a simulator that is taken directly - and literally - from the pseudocode and definitions of functions in the spec, we have hit an "inconvenient" definition.  EXTS, section 1.3.4, page 6:

    EXTS(x)   Result of extending x on the left with sign bits

the problem is illustrated by asking the very simple question, "how many bits?"

also, note: definitions for EXTS64 and EXTZ - both of which are *used* in the spec, and whose definitions may easily be deduced [however that *defeats the purpose of having a spec*...] - are both missing.  EXTZ suffers from the same problem as EXTS, namely that the number of bits to extend to is missing.

we have a "class" (SelectableInt) which records both the value and the *length* of the value (in number of bits), therefore determining where *from* (where to *begin* zero-extending or sign-extending) is not a problem.  where to *stop* extending is the problematic part.  this causes us some considerable inconvenience when it comes to "deducing" where to stop sign/zero-extending.

example, in cmprb:

    src21hi <- EXTZ((RB)32:39)
    if L=0 then
        in_range <- (src22lo <= src1) & (src1 <= src22hi)

so.. um... we zero-extend RB bits 32:39 to an *unknown* - and completely unspecified - length, then perform comparisons on those values, not having any information?  now, yes, the extended bits are all zero, so the comparisons come out the same whatever length is used: we could arbitrarily pick some random length, here, and "everything would work".

a less obvious example:

    if AA then NIA <-iea EXTS(LI || 0b00)
    else       NIA <-iea CIA + EXTS(LI || 0b00)

i believe, here, that it is expected that the "deduction" of the sign-extension length should be by way of knowing the bit-length of NIA!  to that end, we can probably get away with EXTS returning an object that is specifically marked, "ok, we have no idea what the actual bit-length is going to be, so let's set it to 1,000,000 and work it out when the object is actually used".

actually... we can't...

    lbz RT,D(RA)
    -------
    if RA = 0 then b <- 0
    else           b <- (RA)
    EA <- b + EXTS(D)

because when b=0, that's an integer of undefined, arbitrary length!  now, if b had been assigned 0_64 (a 64-bit zero) *then* we would have the information needed.  so this actually turns out to be quite problematic.

actually: in the lbz case, if the definition had been:

    b  <- (RA|0)
    EA <- b + EXTS(D)

where the definition of (Reg|0) *is* defined, clearly (Notation, section 1.3.2, p4), then that would actually be fine!  this is because it is obvious from the definition of (Reg|0) that the result is 64-bit.  with that being defined as 64-bit, b would become 64-bit, and thus when EXTS(D) was added to b, the "unknown" length of EXTS(D) could be resolved.

it would be *significantly* better if EXTS was retired in favour of EXTS64 and EXTS32!  i appreciate that this requires a careful and comprehensive review of every single usage of EXTS.  perhaps i can help there by collecting debug information, either from actual execution or from the parser, as to what bit-lengths EXTS (and EXTZ) interact with?
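to make the problem concrete, here is a minimal sketch (using a simplified stand-in for the real SelectableInt, not the actual soc.decoder code) of what sign/zero extension has to look like when every value carries its length around: the *source* width is known, but the *target* width has to come from somewhere, and the pseudocode never says where:

    class SelectableInt:
        """simplified stand-in: a value plus an explicit bit-length."""
        def __init__(self, value, bits):
            self.bits = bits
            self.value = value & ((1 << bits) - 1)

    def exts(x, target_bits):
        """sign-extend x (x.bits wide) out to target_bits.
        the spec's EXTS(x) gives us x but *not* target_bits."""
        sign = 1 << (x.bits - 1)
        return SelectableInt((x.value ^ sign) - sign, target_bits)

    def extz(x, target_bits):
        """zero-extend: the value is unchanged, only the length grows."""
        return SelectableInt(x.value, target_bits)

    # EXTS(D) in lbz: D is a 16-bit displacement, and only because EA and b
    # are known to be 64-bit can the "64" be supplied here - the pseudocode
    # itself never says so.
    D = SelectableInt(0x8000, 16)
    assert exts(D, 64).value == 0xFFFF_FFFF_FFFF_8000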
Reply on openpower-hdl-cores:
http://lists.mailinglist.openpowerfoundation.org/pipermail/openpower-hdl-cores/2020-April/000022.html

On Sat, Apr 4, 2020, 07:16 Luke Kenneth Casson Leighton <lkcl at lkcl.net> wrote:

> continuing with the implementation of a simulator that is taken
> directly - and literally - from the pseudocode and definitions of
> functions in the spec, we have hit an "inconvenient" definition.
> EXTS, section 1.3.4, page 6:
>
>     EXTS(x)   Result of extending x on the left with sign
>               bits
>
> the problem is illustrated by asking the very simple question, "how many
> bits?"

Maybe it would work to treat it as a conversion from a bitstring to a mathematical integer (like a Python3 int), where it can then be converted back to a bitstring when it is assigned to something with a size again?

You don't need an infinite number of hardware bits to represent the mathematical integer, since you know the exact range the values fall in, so you can calculate exactly how many bits you need, sign/zero-extending as needed when converting back to a bitstring.

Jacob
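A rough sketch of that idea in plain Python (illustrative names, not the project's actual API): EXTS becomes "interpret the bitvector as a signed mathematical integer", and a concrete width is only re-imposed at the point of assignment.

    def exts_to_int(value, bits):
        """interpret a bits-wide bitvector as a signed python int
        (the 'mathematical integer' - no target width needed here)."""
        sign = 1 << (bits - 1)
        return (value ^ sign) - sign

    def assign(value, bits):
        """convert the mathematical integer back to a bitvector only
        when it is assigned to something whose width is known."""
        return value & ((1 << bits) - 1)

    # NIA <-iea CIA + EXTS(LI || 0b00): LI is 24 bits, so LI||0b00 is 26 bits.
    # the intermediate sum is an unbounded python int; NIA's 64-bit width is
    # applied only at the final assignment.
    CIA = 0x1000
    LI_00 = 0x3FF_FFFC                       # -4 as a signed 26-bit value
    NIA = assign(CIA + exts_to_int(LI_00, 26), 64)
    assert NIA == 0xFFC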
(In reply to Jacob Lifshay from comment #1)

> >     EXTS(x)   Result of extending x on the left with sign
> >               bits
> >
> > the problem is illustrated by asking the very simple question, "how many
> > bits?"
>
> Maybe it would work to treat it as a conversion from a bitstring to a
> mathematical integer (like a Python3 int) where it can then be converted
> back to a bitstring when it is assigned to something with a size again?

hmm...

    if AA then NIA <-iea EXTS(LI || 0b00)
    else       NIA <-iea CIA + EXTS(LI || 0b00)

the args to that would be integer values by that point (not the AST), so it would in theory be ok...

> You don't need an infinite number of hardware bits to represent the
> mathematical integer, since you know the exact range the values fall in so
> you can calculate exactly how many bits you need, sign/zero-extending as
> needed when converting back to a bitstring.

if it's ok i'd like to go with "EXTS returns a bitsize of 256" and see how that goes, because string-to-int conversion comes with a performance penalty.  yes, so do python Longs (to use a python2-ism), however i don't believe it will be as high.

let's keep this bug open so we have an alternative idea on record.
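as a sketch, the "bitsize of 256" workaround would look something like this (again using a simplified stand-in for SelectableInt, not the actual code): the arbitrary 256 stands in for the unknown target width, and arithmetic against a real 64-bit value simply truncates back down:

    class SelectableInt:                     # simplified stand-in
        def __init__(self, value, bits):
            self.bits = bits
            self.value = value & ((1 << bits) - 1)

    def EXTS(x):
        """sign-extend to an arbitrary 'large enough' 256 bits, in the
        absence of any target-width information from the pseudocode."""
        sign = 1 << (x.bits - 1)
        return SelectableInt((x.value ^ sign) - sign, 256)

    # EA <- b + EXTS(D) from lbz: b is 64-bit, so the 256-bit result of
    # EXTS(D) gets truncated back to 64 bits by the addition.
    D = SelectableInt(0xFFFC, 16)            # -4 as a 16-bit displacement
    b = SelectableInt(0x1000, 64)
    EA = SelectableInt(b.value + EXTS(D).value, 64)
    assert EA.value == 0xFFC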
(In reply to Luke Kenneth Casson Leighton from comment #2)
> (In reply to Jacob Lifshay from comment #1)
> > You don't need an infinite number of hardware bits to represent the
> > mathematical integer, since you know the exact range the values fall in so
> > you can calculate exactly how many bits you need, sign/zero-extending as
> > needed when converting back to a bitstring.
>
> if it's ok i'd like to go with "EXTS returns bitsize of 256" and see how
> that goes, because string-to-int comes with a performance penalty.

Basically all that's needed is to create a new type holding the original bits plus a record of whether it's sign- or zero-extended; no Python strings need to be created or processed.  What I called a bitstring above is basically a bitvector, the same as the underlying bits in a nmigen Signal; I used the name bitstring instead of nmigen Signal since there are people on the openpower list who aren't familiar with nmigen.

> yes so do python Longs (to use a python2-ism) however i don't believe it
> will be as high.

nmigen's simulator uses Python longs internally to represent the value of a Signal, so it shouldn't be that much slower.

> keep this bug open so we have an alternative idea.
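A sketch of that "new type" (hypothetical names, not something that exists in the tree): it keeps the original bits plus a sign/zero flag, and only produces a concrete width when it meets something that has one.

    class ExtendedInt:
        """original bits plus a record of whether sign- or zero-extension
        was requested; the real width is only applied on demand."""
        def __init__(self, value, bits, signed):
            self.value, self.bits, self.signed = value, bits, signed

        def to_width(self, bits):
            if self.signed:
                sign = 1 << (self.bits - 1)
                v = (self.value ^ sign) - sign
            else:
                v = self.value
            return v & ((1 << bits) - 1)

    def EXTS(value, bits):
        return ExtendedInt(value, bits, signed=True)

    def EXTZ(value, bits):
        return ExtendedInt(value, bits, signed=False)

    # EXTS(D) for lbz: the 16-bit displacement carries the "sign-extend me"
    # request around until it is combined with the 64-bit base address.
    assert EXTS(0xFFFC, 16).to_width(64) == 0xFFFF_FFFF_FFFF_FFFC
    assert EXTZ(0xFFFC, 16).to_width(64) == 0x0000_0000_0000_FFFC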
(In reply to Jacob Lifshay from comment #3)
> (In reply to Luke Kenneth Casson Leighton from comment #2)
> > (In reply to Jacob Lifshay from comment #1)
> > > You don't need an infinite number of hardware bits to represent the
> > > mathematical integer, since you know the exact range the values fall in so
> > > you can calculate exactly how many bits you need, sign/zero-extending as
> > > needed when converting back to a bitstring.
> >
> > if it's ok i'd like to go with "EXTS returns bitsize of 256" and see how
> > that goes, because string-to-int comes with a performance penalty.
>
> Basically all that's needed is to create a new type with the original bits
> and record if it's sign or zero extended, no Python strings need to be
> created or processed.

oh ok - a different type, rather than an "actual python string".  or, a flag (in the SelectableInt) would do as well.  it's... slightly better than overloading the meaning of "bitlen=256".

> nmigen's simulator uses python longs internally to represent the value of a
> Signal, so it shouldn't be that much slower.

what i mean is... hmmm, it's been a long time since i looked at the code: i believe the interpreter uses an int (long) when numbers fit into a machine word, and a different representation - which takes far more time - when numbers go beyond 2^64.  this is in the actual interpreter source code.  therefore, without being aware of it, if you use numbers greater than 2^64 the interpreter slows down slightly.  i could be wrong about that, it's been a while since i looked at the python interpreter source code.
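a quick, unscientific way to check how much of a penalty there actually is on any given interpreter (nothing project-specific, just timeit):

    import timeit

    # same arithmetic, one value below 2^64 and one well above it
    small = timeit.timeit("x * x + x", setup="x = 2**60", number=1_000_000)
    big   = timeit.timeit("x * x + x", setup="x = 2**200", number=1_000_000)
    print(f"below 2^64: {small:.3f}s    above 2^64: {big:.3f}s")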
hi michael, i reverted

commit b5e4e847c2841189386da3509949d9206de92f8b
Author: Michael Nolan <mtnolan2640@gmail.com>
Date:   Sun Apr 5 14:31:48 2020 -0400

    Implement bug 278, comment 1 - better version of EXTS

unfortunately, as it wasn't a single-purpose commit, the revert has also undone this:

diff --git a/src/soc/decoder/isa/test_caller.py b/src/soc/decoder/isa/test_caller.py
index aa6f23a..12db984 100644
--- a/src/soc/decoder/isa/test_caller.py
+++ b/src/soc/decoder/isa/test_caller.py
@@ -9,7 +9,7 @@ from soc.simulator.program import Program
 from soc.decoder.isa.caller import ISACaller, inject
 from soc.decoder.selectable_int import SelectableInt
 from soc.decoder.orderedset import OrderedSet
-from soc.decoder.isa import ISA
+from soc.decoder.isa.all import ISA

i'll leave that with you to fix?

the reason i reverted the use of exts() is that exts() returns plain integers.  when it comes to SV, we will *need* things to be in classes such as SelectableInt, because it is there that the bitwidths can be over-ridden (to implement the polymorphic behaviour of SV).  plain ints cannot have their behaviour over-ridden.

EXTS and the other functions will also at some point need to go into a class, so that their behaviour can also be over-ridden.  however they can be dropped into the namespace as-is - appearing to be global non-class-based functions - using the @inject decorator.
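roughly what the distinction looks like (simplified stand-ins, not the actual soc.decoder code): a plain-int exts() throws the length information away, whereas a SelectableInt-returning version keeps the result inside a class whose behaviour - including how widths are handled - can later be over-ridden for SV:

    def exts_plain(value, bits):
        """returns a bare python int: the length is gone, and an int's
        behaviour cannot be over-ridden later."""
        sign = 1 << (bits - 1)
        return (value ^ sign) - sign

    class SelectableInt:                     # simplified stand-in
        def __init__(self, value, bits):
            self.bits = bits
            self.value = value & ((1 << bits) - 1)

    def exts_si(x, target_bits):
        """returns a SelectableInt: the result stays inside a class, so a
        subclass can change how bitwidths behave without touching callers."""
        return SelectableInt(exts_plain(x.value, x.bits), target_bits)

    class SVSelectableInt(SelectableInt):
        """hypothetical SV variant: same interface, overridden width rules
        would go here."""
        pass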