Bug 178 - first coriolis2 tutorial, workflow and "test project" page
Summary: first coriolis2 tutorial, workflow and "test project" page
Status: RESOLVED FIXED
Alias: None
Product: Libre-SOC's first SoC
Classification: Unclassified
Component: Hardware Layout
Version: unspecified
Hardware: PC Linux
Importance: --- enhancement
Assignee: Jock Tanner
URL:
Depends on: 217
Blocks: 138
Reported: 2020-02-11 16:41 GMT by Luke Kenneth Casson Leighton
Modified: 2022-09-01 20:13 BST
CC List: 7 users

See Also:
NLnet milestone: NLNet.2019.02.029.Coriolis2
total budget (EUR) for completion of task and all subtasks: 3000
budget (EUR) for this task, excluding subtasks' budget: 3000
parent task for budget allocation: 138
child tasks for budget allocation:
The table of payments (in EUR) for this task; TOML format:
tobias = { amount = 200, paid = 2020-12-21 }
lip6_donated = { amount = 950, submitted = 2022-08-26, paid = 2022-08-31 }
staf = { amount = 150, paid = 2021-04-23 }
lkcl = { amount = 1200, paid = 2020-03-14 }
cole = { amount = 500, paid = 2020-12-20 }


Attachments
modified version of alu_hier.py to output IL file with a module other than "top" (1.58 KB, text/x-python)
2020-02-12 13:47 GMT, Luke Kenneth Casson Leighton
auto-generated output from running alu_hier.py (2.97 KB, text/plain)
2020-02-12 13:48 GMT, Luke Kenneth Casson Leighton
test_part_add for coriolis2 test (13.78 KB, text/plain)
2020-02-20 20:26 GMT, Luke Kenneth Casson Leighton
script output (44.57 KB, application/x-bzip)
2020-02-21 19:29 GMT, Luke Kenneth Casson Leighton
snx core block layout (4.73 KB, text/x-python)
2020-02-25 14:58 GMT, Luke Kenneth Casson Leighton
cgt screenshot (38.37 KB, image/png)
2020-02-27 19:02 GMT, Luke Kenneth Casson Leighton
screenshot of fpmul64 (101.96 KB, image/png)
2020-03-02 16:19 GMT, Luke Kenneth Casson Leighton
patch to alliance-check-toolkit ALU16 (3.00 KB, patch)
2020-03-04 14:06 GMT, Luke Kenneth Casson Leighton
Coriolis build failed on Debian 10 (2.70 KB, text/plain)
2020-03-06 02:07 GMT, Jock Tanner
Where is the chicken? (15.41 KB, image/png)
2020-03-06 04:01 GMT, Jock Tanner
This is what my “empty dreal window” looks like (48.64 KB, image/png)
2020-03-06 16:55 GMT, Jock Tanner
Coriolis2 ioring.py with explicit pad positioning (1.44 KB, text/x-python)
2020-03-12 15:55 GMT, Jean-Paul Chaput

Description Luke Kenneth Casson Leighton 2020-02-11 16:41:45 GMT
we need the alliance/coriolis2 workflow documented, a suite of tutorials
found (or written), and a "test project" completed which gives a guide
to the completion time of a layout.

https://libre-riscv.org/HDL_workflow/coriolis2/
https://git.libre-riscv.org/?p=soclayout.git;a=summary
Comment 1 Luke Kenneth Casson Leighton 2020-02-11 17:04:34 GMT
ok tobias i've created a soclayout repo:
git clone  gitolite3@libre-riscv.org:soclayout.git

can i suggest copying the alliance-check-toolkit/benchs/6502/cmos directory, it looks dead simple?

we don't however want to use verilog, we want ilang, so the synthesis-yosys.mk
file will need changing.

i just managed to verify that the following (manually-run) yosys
commands will work:

set liberty_file /home/chroot/coriolis/home/lkcl/alliance/install/cells/sxlib/sxlib.lib
read_ilang part_sig_add.il
hierarchy -check -top top
synth -top top
dfflibmap -liberty $liberty_file
abc -liberty $liberty_file
clean
write_blif test.blif

and ta-daaa, it produced a blif file!

if you can modify the mk/synthesis-yosys.mk file that comes with the 6502/cmos
to take into account we are using ilang, that would make a great start.

we can then pick a simple module as a starting point and go from there.

later we can do another one with, say, the ARM chip, which has actual
GPIO pads.
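(for illustration only, a hypothetical python wrapper around the same manual sequence; the file names and liberty path are taken from the commands above, and the helper itself is not in the repo:)

import subprocess

# hypothetical helper: run the manual yosys sequence above as one script.
# the liberty path is simply inlined here, because the "set" command used
# above is not available in a plain (non-tcl) yosys script.
def run_yosys(il_file="part_sig_add.il", top="top",
              liberty="/home/chroot/coriolis/home/lkcl/alliance/install/cells/sxlib/sxlib.lib",
              out_blif="test.blif"):
    script = "; ".join([
        "read_ilang %s" % il_file,
        "hierarchy -check -top %s" % top,
        "synth -top %s" % top,
        "dfflibmap -liberty %s" % liberty,
        "abc -liberty %s" % liberty,
        "clean",
        "write_blif %s" % out_blif,
    ])
    subprocess.run(["yosys", "-p", script], check=True)

if __name__ == "__main__":
    run_yosys()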
Comment 2 Cole Poirier 2020-02-12 00:30:40 GMT
Hi Libre-SOC team,

Apologies for not getting back to you sooner about my progress with the Coriolis HDL workflow. I was able to follow the debian 9 installation instructions (https://www.debian.org/releases/stretch/amd64/apds03.html.en). However, after completing this and returning to the workflow page (https://libre-riscv.org/HDL_workflow/coriolis2/), I was confused by the first instruction, "In advance, edit /etc/fstab and add mount points: personally I prefer using mount --bind points", which is followed by a pastable code snippet. I got stuck here trying to work out whether that instruction was meant to be carried out on the initial host system before following the debian installation instructions, or afterwards. I'm sorry, my *nix filesystem and boot procedure knowledge is too basic for me to parse this on my own. I'm confused as to which parts of the debian installation guide I should follow, and at what point in the setup procedure I should follow luke's instructions.
Comment 3 Luke Kenneth Casson Leighton 2020-02-12 00:46:58 GMT
hi cole, ok, great comments, i notice you edited the wiki page.  i pointed out there that if you use mount --bind commands they are lost after a reboot.

i also edited the page to make it clear that you don't follow the debian instructions *and then* follow these ones.

if you have schroot running, try the alliance install, plus the convenience modifications, including installing ccache if you want to rebuild more than once; this will save some time.

apt-get install ccache

can you add that to the wiki page?
Comment 4 Luke Kenneth Casson Leighton 2020-02-12 00:51:31 GMT
also, chroots never require "booting". chroot simply sets a new "root" point in the filesystem, for a command.

do "man chroot" for more info.
Comment 5 Luke Kenneth Casson Leighton 2020-02-12 13:36:15 GMT
(copy of jp's reply on mailing list so it is not lost)

On Tue, 2020-02-11 at 17:04 +0000, bugzilla-daemon@libre-riscv.org wrote:
> http://bugs.libre-riscv.org/show_bug.cgi?id=178
>
> --- Comment #1 from Luke Kenneth Casson Leighton <lkcl@lkcl.net> ---
> ok tobias i've created a soclayout repo:
> git clone  gitolite3@libre-riscv.org:soclayout.git
>
> can i suggest copying the alliance-check-toolkit/benchs/6502/cmos directory, it
> looks dead simple?

  No problem. This is all free (GPLed).

> we don't however want to use verilog, we want ilang, so the synthesis-yosys.mk
> file will need changing.

  Can you supply me with a ".il" or a deterministic way to produce one
  and I will integrate it to alliance-check-toolkit.

  My vision of alliance-check-toolkit is to gather all kinds of designs
  in it to serve as regression tests / benchmarks / examples.

> i just managed to verify that the following (manually-run) yosys
> commands will work:
>
> set liberty_file
> /home/chroot/coriolis/home/lkcl/alliance/install/cells/sxlib/sxlib.lib
> read_ilang part_sig_add.il
> hierarchy -check -top top
> synth -top top
> dfflibmap -liberty $liberty_file
> abc -liberty $liberty_file
> clean
> write_blif test.blif
>
> and ta-daaa, it produced a blif file!
>
> if you can modify the mk/synthesis-yosys.mk file that comes with the 6502/cmos
> to take into account we are using ilang, that would make a great start.

  Yes, see above.

> we can then pick a simple module as a starting point and go from there.
>
> later we can do another one with, say, the ARM chip, which has actual
> GPIO pads.

  I've also remembered that Coriolis is almost completely configured to
  use MOSIS scn6m_deep "real" technology which is a 180nm one.
  It can be used to check the whole toolchain down to real layout.
  Thanks to Pr. Shimizu who did make the RDS file.
  You can then see your design in GDS under Magic.
Comment 6 Luke Kenneth Casson Leighton 2020-02-12 13:44:09 GMT
ilang.convert function:

    def convert(elaboratable, name="top"

so yes you *can* convert an ilang file and give its top-level a
different name, which will allow the synthesis-yosys.mk file to
remain pretty much the same.

what i suggest is take this example:
https://github.com/m-labs/nmigen/blob/master/examples/basic/alu_hier.py

hmm no wait, i will need to modify it for you so that it outputs a
name other than "top".
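(a minimal sketch of that kind of conversion, assuming the ALU class from the alu_hier.py example linked above; the local import is hypothetical:)

from nmigen.back import rtlil
from alu_hier import ALU   # hypothetical import of the example's ALU class

alu = ALU(width=16)
# name= overrides the default "top" module name in the emitted ILANG
output = rtlil.convert(alu, name="alu_hier",
                       ports=[alu.op, alu.a, alu.b, alu.o])
with open("alu_hier.il", "w") as f:
    f.write(output)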
Comment 7 Luke Kenneth Casson Leighton 2020-02-12 13:47:15 GMT
Created attachment 21 [details]
modified version of alu_hier.py to output IL file with a module other than "top"

ok jp, you can just run "python3 alu_hier.py" and it will output a file named "alu_hier.il" to the cwd.

hmmm... do you actually _want_ dependence on nmigen in alliance/coriolis2 for this example?  will anyone other than nmigen use ILANG?  i don't know of any other project that uses ILANG as an HDL input into yosys, everybody uses verilog or vhdl, so it might not give the right impression...

i will add a new attachment of alu_hier.il for you as well, you can decide.
Comment 8 Luke Kenneth Casson Leighton 2020-02-12 13:48:14 GMT
Created attachment 22 [details]
auto-generated output from running alu_hier.py

ok this is the output from running python3 alu_hier.py
Comment 9 Tobias Platen 2020-02-12 14:47:34 GMT
I cannot clone the repository, it seems not to exist:

git clone  gitolite3@libre-riscv.org:soclayout.git
ssh: connect to host libre-riscv.org port 22: Connection refused
Comment 10 Luke Kenneth Casson Leighton 2020-02-12 14:56:18 GMT
(In reply to Tobias Platen from comment #9)
> I cannot clone the repository, it seems not to exist:
> 
> git clone  gitolite3@libre-riscv.org:soclayout.git
> ssh: connect to host libre-riscv.org port 22: Connection refused

port 922.  sorry, i have "Port 922" in a ~/.ssh/config file.
see https://libre-riscv.org/HDL_workflow/ - it shows how to
do that, or how to specify port 922 directly in the git clone.
Comment 11 Tobias Platen 2020-02-12 15:12:09 GMT
I've now cloned the repo and saw the first empty commit.
Comment 12 Luke Kenneth Casson Leighton 2020-02-12 15:29:17 GMT
(In reply to Tobias Platen from comment #11)
> I've now cloned the repo and saw the first empty commit.

fantastic, ok (btw cole, you could follow this as well!  we will
update the docs accordingly)

so the next step is just to literally copy the contents of
alliance-check-toolkit/benchs/6502/cmos, minus the m65s.v file
and minus the deprecated.coriolis2 subdirectory, then, hmmm...
how about this?

create a file "example_test.py" with this in it.. i'm hacking this
together from two sources so it's untested:


from nmigen import Signal
from nmigen.cli import rtlil
from ieee754.part.test.test_partsig import TestAddMod

def test():
    width = 16
    part_mask = Signal(4)  # divide into 4-bits
    module = TestAddMod(width, part_mask)
    create_ilang(module,
                 [part_mask,
                  module.a.sig,
                  module.b.sig,
                  module.add_output,
                  module.eq_output],
                 "part_sig_add")


def create_ilang(dut, ports, test_name):
    vl = rtlil.convert(dut, name=test_name, ports=ports)
    with open("%s.il" % test_name, "w") as f:
        f.write(vl)

if __name__ == "__main__":
    test()


and add that to the (new) Makefile with a dependency "part_sig_add.il"?

global-search-replace m65s with part_sig_add...

modify the "read_verilog" command to replace it with "read_ilang"...

what do you think, tobias?
Comment 13 Tobias Platen 2020-02-14 14:19:09 GMT
The python script now runs but yosys still complains
ERROR: No such command: set (type 'help' for a command overview

and if I don't use the set command, I get the following error

yosys> read_ilang part_sig_add.il
1. Executing ILANG frontend.
Input filename: part_sig_add.il

yosys> hierarchy -check -top top

2. Executing HIERARCHY pass (managing design hierarchy).
ERROR: Module `top' not found!

Which version of yosys did you use, luke?
Comment 14 Luke Kenneth Casson Leighton 2020-02-14 14:24:08 GMT
please do make sure you commit the code, i can't take a look at it
otherwise.  i just did a "git pull" and there's no commits.

i *should* have the latest yosys from git.... although it looks like...
lkcl@fizzy:~/src/libreriscv/soclayout$ yosys -V
Yosys 0.8+615 (git sha1 6538671c, clang 7.0.0-svn342187-1~exp1~20180919215158.32 -fPIC -Os)


oh wait, of course the one in the coriolis2 chroot is from debian/stretch!

(coriolis2)lkcl@fizzy:~/alliance/build$ yosys -V       
Yosys 0.7 (git sha1 61f6811, gcc 6.3.0-18+deb9u1 -O2 -fdebug-prefix-map=/build/yosys-XOsRIM/yosys-0.7=. -fstack-protector-strong -fPIC -Os)

we may need to modify the instructions to build yosys from source
(latest git) and also pull in nmigen etc. etc.

hmmm, that's going to be fun
Comment 15 Tobias Platen 2020-02-14 15:53:28 GMT
getting weird errors, it seems that the cell libraries used are invalid.

5.1.1. Executing ABC.
ERROR: Can't open ABC output file `/tmp/yosys-abc-1BUUB2/output.blif'.
test_name: top
Comment 16 Tobias Platen 2020-02-14 16:38:37 GMT
I was using an old version of the cell libraries. 
I solved the problem using the new ones from alliance-check-toolkit.
Now python3 part_sig_add.py will produce valid output.
Comment 17 Luke Kenneth Casson Leighton 2020-02-14 17:26:39 GMT
(In reply to Tobias Platen from comment #16)
> I was using an old version of the cell libraries. 
> I solved the problem using the new ones from alliance-check-toolkit.
> Now python3 part_sig_add.py will produce valid output.

ahh briiilliant, that's really good news!  can you quickly make sure that
the wiki page has what is needed? https://libre-riscv.org/HDL_workflow/coriolis2/

oh, can you add the Makefile as well to the repo?  i see coriolis2/katana.py,
settings.py and __init__.py, no Makefile yet?

also... although i personally like the way you did it as a python program
(run_yosys() in examples/part_sig_add.py), let's keep that in the Makefile
#include-style format, because we'll be using it quite a lot.

unix rule, "one command does one thing and does it well".

if you don't beat me to it (i have a phone call to make, to alain),
i will experiment, here, in a bit, with setting the module name to
something other than "top" in create_ilang, it _should_ work when
setting an alternative, at which point we can take that copy of
mk/synthesis-yosys.mk and really pretty much cut/paste it exactly,
simply replacing "read_verilog" with "read_ilang".
Comment 18 Luke Kenneth Casson Leighton 2020-02-14 18:15:18 GMT
excellent, yes, global/search/replace "top" as the modulename
in examples/part_sig_add.py and it works fine.
Comment 19 Luke Kenneth Casson Leighton 2020-02-14 21:02:50 GMT
i added mk/synthesis-yosys.mk from the 6502/cmos bench, substituting
"verilog" with "ilang".  shoooould be good to go?
Comment 20 Tobias Platen 2020-02-15 13:39:16 GMT
I've had a look at mk/synthesis-yosys.mk, fixed some errors, and added a working Makefile. Next step will be coriolis.
Comment 21 Luke Kenneth Casson Leighton 2020-02-15 14:11:05 GMT
(In reply to Tobias Platen from comment #20)
> I've had a look at mk/synthesis-yosys.mk, fixed some errors, and added a
> working Makefile. Next step will be coriolis.

excellent.

yes, we literally want exactly what is used in coriolis2 cmos Makefile,
with as few deviations from that as possible.

so, no hard-coded macros (LIBERTY_FILE=/path/to/alliance-check-toolkit/cells/nsxlib/nsxlib.lib)

because all these hard-coded macros go into user.d/{yourloginid}-user.mk
and they are pulled in via mk/design-flow.mk

obviously, substitute s/m65s/part_sig_add

the way that it looks like the coriolis2 makefile system works is,
it extracts the "top" name from the make "things".

so we do not want "top" hard-coded into the Makefile, either, or
YOSYS_TOP=/tmp/yosys_top, although i appreciate that it gets things
working.
Comment 22 Luke Kenneth Casson Leighton 2020-02-15 14:13:02 GMT
in cmos/Makefile:

      NETLISTS = m65s\
                 dependent_module \
                 another_dependent_module


hmmm these will basically need to be "grep module part_sig_add.il"
and manually created, for now.

the next phase - not right now - i'd really like to see those created
from an automated command that grabs them directly from (any, given)
.il file

let's get "make lvx" working first, though (see the sketch below).
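(for illustration, a hypothetical helper, not in the repo, showing how the module list could be pulled straight out of a .il file:)

#!/usr/bin/env python3
"""Hypothetical helper: print the module names found in an ILANG (.il)
file, so the NETLISTS list does not have to be maintained by hand."""
import re
import sys

def ilang_modules(path):
    # ILANG module declarations look like:  module \top
    pat = re.compile(r'^module \\(\S+)')
    mods = []
    with open(path) as f:
        for line in f:
            m = pat.match(line)
            if m:
                mods.append(m.group(1))
    return mods

if __name__ == "__main__":
    print(" ".join(ilang_modules(sys.argv[1])))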
Comment 23 Luke Kenneth Casson Leighton 2020-02-16 11:20:54 GMT

On Sunday, February 16, 2020, Jean-Paul Chaput <Jean-Paul.Chaput@lip6.fr> wrote:

> yep done, added already, and replied.

  Caramba! Missed again!
     -- Tintin, L'Oreille Cassée (The Broken Ear).

:)
 

  My apologies here, the LIP6 spam filter is still tagging some,
  but not all, of your messages as spam,

it is utterly bizarre.  are you allowed to ask admins for whitelisting?


 
> > The Makefile system was the quickest way to stitch together the
> > design flow. In the long run, what I would try is to wrap each
> > external (non-Coriolis2 tool) in a Python wrapper, so making a
> > design will be one big Python script.
> 
> 
> ok interesting. ( i quite like Makefiles, because of their ability to
> handle file dependencies)

  I was having the feeling that the whole Makefile system was reaching
  an "obfuscation limit" and would deter people from using it.

things like \$$macroname, these are where i personally would draw a line.

having python tools that are called *by* Makefiles because make works out that to generate file X.ext from X.ext2, doing that job in python, is a nuisance.

when it goes recursive, it gets even more hairy in python (bear in mind we need to do a hierarchical layout, not a "full automated and pray" one)

so for example, we need to generate the netlist not from a hardcoded list that goes into the top level Makefile, we need a *program* that generates that information, based on what comes out of the *nmigen* conversion to ilang.


  The idea of Python wrapper is to be able to manage some kind of
  "meta-information" across the whole design flow. And, hopefully,
  reduce the obfuscation of the Makefile by using object oriented
  structuration.
    And lastly (but very long term) to seamlessly switch from a
  wrapped old tool to a shiny new one directly implemented in
  Coriolis...

:)
Comment 24 Luke Kenneth Casson Leighton 2020-02-16 18:17:07 GMT

On Sunday, February 16, 2020, Jean-Paul Chaput <Jean-Paul.Chaput@lip6.fr> wrote:

  I can, I'm one of them... But not the one in charge of the mail.
  That one has just gone into vacation until the end of the month.
  I will ask him when he returns.
    Your mail is tagged as spam by spamassasin because of:

      URIBL_DBL_SPAM     Contains a spam URL listed in the Spamhaus DBL

lauri pointed me at it.  what the hell the domain's doing in that i don't know.  it's been removed.
 
> having python tools that are called *by* Makefiles because make works out that
> to generate file X.ext from X.ext2, doing that job in python, is a nuisance.

  There is a tradeoff here. If your tools are separated and communicate
  through files, Makefile is the best tool.

that's what i advocate.  *make* separate tools, follow the unix philosophy, "one thing and do it well".
 
    On the other hand, the paradigm of Coriolis is to integrate the
  tools so that they communicate efficiently through a common C++
  data structure, becoming a kind of a big binary blob. This is
  the acknowledged trend among this kind of tools.

for things that *require* a common c++ data structure, absolutely fantastic.

however for e.g. nmigen and yosys, as external tools...


    Anyway, I try to make Coriolis "agnostic" on the way it is used,
  that is not try to enforce any specific way to use it.


> when it goes recursive, it gets even more hairy in python (bear in mind we need
> to do a hierarchical layout, not a "full automated and pray" one)
> 
> so for example, we need to generate the netlist not from a hardcoded list that
> goes into the top level Makefile, we need a *program* that generates that
> information, based on what comes out of the *nmigen* conversion to ilang.

  Sounds more like a Python program to me than a Makefile.

exactly.  which is called *by* the Makefile, because the Makefile knows when to call it, in order to generate the output from dependencies.

otherwise, the python program has to start grepping around the filesystem, looking for partially completed dependency output, and pretty soon you have just duplicated GNU make... in python.
Comment 25 Luke Kenneth Casson Leighton 2020-02-19 22:05:42 GMT
hi jean-paul, i began compiling part_sig_add.py, ran into
"PortMap::_lookup() unconnected" and tried using alu_hier.py instead
(it's simpler), same issue

[ERROR] PortMap::_lookup() Unconnected <<id:3927 Plug UNCONNECTED ck_htree.i>>.
[ERROR] PortMap::_lookup() Unconnected <<id:3926 Plug UNCONNECTED ck_htree.q>>.
       + sub (netlist,layout).
       + add (netlist,layout).
[ERROR] PortMap::_lookup() Unconnected <<id:3927 Plug UNCONNECTED ck_htree.i>>.
[ERROR] PortMap::_lookup() Unconnected <<id:3926 Plug UNCONNECTED ck_htree.q>>.
mk/pr-coriolis.mk:83: recipe for target 'alu_hier_cts_r.vst' failed
make: [alu_hier_cts_r.vst] Error 1 (ignored)
MBK_OUT_LO=al; export MBK_OUT_LO; MBK_SEPAR='_'; export MBK_SEPAR; /home/lkcl/alliance/install/bin/cougar       -c -f alu_hier_cts_r alu_hier_cts_r_ext

looking through the errors:

[ERROR] CParsVst() VHDL Parser - File:<./alu_hier.vst> Line:254
        Port map assignment discrepency instance:0 vs. model:1
        Python stack trace:
        #0 in                  <module>() at /home/lkcl/alliance-check-toolkit/bin/doChip.py:320

Traceback (most recent call last):
  File "/home/lkcl/alliance-check-toolkit/bin/doChip.py", line 335, in <module>
    sys.exit( shellSuccess )
NameError: name 'shellSuccess' is not defined
mk/pr-coriolis.mk:83: recipe for target 'alu_hier_cts_r.vst' failed
make: [alu_hier_cts_r.vst] Error 1 (ignored)
MBK_OUT_LO=al; export MBK_OUT_LO; MBK_SEPAR='_'; export MBK_SEPAR; /home/lkcl/alliance/install/bin/cougar       -c -f alu_hier_cts_r alu_hier_cts_r_ext


and then at alu_hier.vst:

  ck_htree : buf_x2
  port map ( i   => UNCONNECTED
           , q   => UNCONNECTED
           , vdd => vdd
           , vss => vss
           );

so how do we fix that?  create a clock as part of the output, somehow?
Comment 26 Luke Kenneth Casson Leighton 2020-02-19 22:16:18 GMT
ok worked out that you have to have signals named "m_clock" and "p_reset"

now we have...


     [035] Bipart. HPWL:       4381 RMST:       4717
           Linear. HPWL:       2155 RMST:       2388
           Orient. HPWL:       2147 RMST:       2376
     [036] Bipart. HPWL:       4346 RMST:       4683
           Linear. HPWL:       2191 RMST:       2429
  o  Detailed Placement.
     [000] Oriented ....... HPWL:       3996 RMST:       4361

[ERROR] Didn't manage to pack a cell: leave more whitespace and avoid macros near the right side
        
        Python stack trace:
        #0 in                ScriptMain() at .../dist-packages/cumulus/plugins/ClockTreePlugin.py:108

  o  Recursive Save-Cell.
     + alu_hier (netlist,layout).
Comment 27 Luke Kenneth Casson Leighton 2020-02-19 23:00:23 GMT
(cut/pasting reply into bugtracker)

On Wed, 2020-02-19 at 22:16 +0000, bugzilla-daemon@libre-riscv.org wrote:
> http://bugs.libre-riscv.org/show_bug.cgi?id=178
>
> --- Comment #26 from Luke Kenneth Casson Leighton <lkcl@lkcl.net> ---
> ok worked out that you have to have signals named "m_clock" and "p_reset"
>
> now we have...
>
>
>      [035] Bipart. HPWL:       4381 RMST:       4717
>            Linear. HPWL:       2155 RMST:       2388
>            Orient. HPWL:       2147 RMST:       2376
>      [036] Bipart. HPWL:       4346 RMST:       4683
>            Linear. HPWL:       2191 RMST:       2429
>   o  Detailed Placement.
>      [000] Oriented ....... HPWL:       3996 RMST:       4361
>
> [ERROR] Didn't manage to pack a cell: leave more whitespace and avoid macros
> near the right side

  The placer algorithm needs a certain amount of free space to operate.
  On a big design 5% of free space is enough to ensure that, because that's
  still quite a lot of space. But on a small design like this example it is
  not enough; you have to increase it to 7% or 10%.
    This is done in the configuration file "./coriolis2/settings.py",
  look for:

    Cfg.getParamPercentage( 'etesian.spaceMargin' ).setPercentage( 5.0 )

    The name of the clock signal can be changed; it doesn't need to be called
  "m_clock".

    af  = CRL.AllianceFramework.get()
    env = af.getEnvironment()
    env.setCLOCK( '^ck$|m_clock' )

    For "p_reset", that's strange.

    How can I get the design to check it ?

    I'm working on directly integrating nMigen in alliance-check-toolkit.
    I did get the latest nMigen but, on my Debian 9 chroot, it does not
  work because I guess it needs at least Python 3.6 (only 3.5 on Debian).
  But it works on CentOS 7 ;-).
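(pulling the two hints above together, a minimal coriolis2/settings.py sketch; only the three calls are taken from the reply above, the import lines and surrounding boilerplate are assumptions:)

# sketch only: imports/boilerplate assumed, the calls below are quoted above
import Cfg
import CRL

# small designs need a larger whitespace margin for the placer
Cfg.getParamPercentage( 'etesian.spaceMargin' ).setPercentage( 10.0 )

# accept either "ck" or "m_clock" as the clock net name
af  = CRL.AllianceFramework.get()
env = af.getEnvironment()
env.setCLOCK( '^ck$|m_clock' )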
Comment 28 Luke Kenneth Casson Leighton 2020-02-19 23:11:11 GMT
ah haaaaa!  the design was so small i had to set etesian to 50%

also the trick of the clock worked.

yes you need to install python3.6 however in debian/9 that actually
is a "dummy" operation that fails.  so i followed some instructions
which adds (pins) debian/testing, added python 3.7 aaand of course
it removed libboost 1.62 sigh.

what i suggest therefore for now is to run "python3.7 examples/alu.py"
from *OUTSIDE* of the chroot, copy the alu.il file *into* the chroot
and *then* run make lvx.
Comment 29 Jacob Lifshay 2020-02-19 23:38:19 GMT
(In reply to Luke Kenneth Casson Leighton from comment #28)
> yes you need to install python3.6 however in debian/9 that actually
> is a "dummy" operation that fails.  so i followed some instructions
> which adds (pins) debian/testing, added python 3.7 aaand of course
> it removed libboost 1.62 sigh.
> 
> what i suggest therefore for now is to run "python3.7 examples/alu.py"
> from *OUTSIDE* of the chroot, copy the alu.il file *into* the chroot
> and *then* run make lvx.

Maybe you could use Docker to build it since it makes it nearly trivial to reproduce, and you can copy between different containers as part of the build process, allowing you to use different base images for the different parts -- Debian 9 for one part and a different image with Python 3.8 or similar for the other part.

see https://docs.docker.com/develop/develop-images/multistage-build/
Comment 30 Luke Kenneth Casson Leighton 2020-02-19 23:48:14 GMT
i may just see if i can build python 3.6/3.7 from source or find a
prebuild somewhere.

i went through the process with pypy installation from source
so know what to expect, have to manually grab pip3 and a couple
other things.
Comment 31 Luke Kenneth Casson Leighton 2020-02-20 10:51:40 GMT
managed to get python 3.7 and boost 1.67 installed with apt-get -t testing install and rebuilt coriolis2 and alliance with those.  did not select py3 for the build but it is "there" and thus available for nmigen.
Comment 32 Luke Kenneth Casson Leighton 2020-02-20 18:03:46 GMT
hmm, jean-paul, one of the things that nmigen does is, it uses "$1", "$2"
etc. at the end of names, in order to make them globally unique, where there
is a name-clash.

i'm noticing that this isn't being transferred through properly.
make of course uses "$$" to indicate "actual $" which i tried: it
seems to work except that mux2$19 gets turned into "mux27783529_r.ap"
and despite that weirdness it works fine right up until you do
"make clean".

the files that *actually* get created are mux2_19.ap.

the files that get *read* are mux2_19.ap.

the files that are *deleted* are mux27783529_r.ap

yes it would be wonderful if the $ symbol had not been outputted
into the ILANG files...
Comment 33 Luke Kenneth Casson Leighton 2020-02-20 18:14:04 GMT
another error, jean-paul:

EtesianEngine::toColoquinte(): Non-leaf instance \"%s\" of \"%s\" has an abutment box but is *not* placed.

what could cause that one?
Comment 34 Jean-Paul Chaput 2020-02-20 18:49:46 GMT
(In reply to Luke Kenneth Casson Leighton from comment #33)
> another error, jean-paul:
> 
> EtesianEngine::toColoquinte(): Non-leaf instance \"%s\" of \"%s\" has an
> abutment box but is *not* placed.
> 
> what could cause that one?

  Probably tied to the $$ and Makefile one. Be sure to remove any ".ap" files
  between two runs. And any ".vst" as well. For a given design, always restart from
  a clean slate (better handled by Coriolis for now).

  Concerning the $$ character, this may turn into an annoying problem and I think we may
  have to choose a policy regarding it. The Coriolis2 core database, Hurricane is
  completely agnostic about characters in names identifiers. BUT the I/O layer,
  currently AllianceFramework for interacting with Alliance is not. In fact, it has
  been built to manage VHDL, and characters authorized in names reflect that.
    We can agree on a way to sanitize the identifier on the fly when loaded
  by the blif2vst converter (note that VST stands for Vhdl STructural...)
    For example mux$19 ==> mux_u19

  I know it's lazy of me, but if you direct me where I can get your test design
  I may try directly.

  The Borg Collective.
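(for illustration, a tiny sketch of the sanitisation rule being proposed here; this is not the actual blif2vst.py code:)

import re

def sanitize(name, marker="_u"):
    """Replace '$' (illegal in VST/VHDL identifiers) with a safe marker,
    e.g. mux$19 -> mux_u19."""
    return re.sub(r"\$", marker, name)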
Comment 35 Luke Kenneth Casson Leighton 2020-02-20 19:03:17 GMT
hiya jp ok git clone  gitolite3@libre-riscv.org:soclayout.git

soclayout$ python3 examples/part_sig_add.py
soclayout$ make -f Makefile2

the 2nd Makefile uses nets2.txt which contains $$ substitutions
for $.

i'm currently trying to track down why zero_27...24 have been created
and assigned to carry_in (part_sig_add_cts_r.vst line 1589) and
why zero_24 (and others) is giving Error 38 :width or/and type mismatch
Comment 36 Luke Kenneth Casson Leighton 2020-02-20 20:26:51 GMT
Created attachment 23 [details]
test_part_add for coriolis2 test

okaaay a little easier for you, jean-paul, attaching the auto-generated
test_part_add.il here (so you don't need to install a load of stuff)

this is "make -f Makefile3 lvx" and i'm getting
Instance 'subckt_40_add1_subckt_13_an12_x1' only in netlist 2

and 30 other similar errors

earlier we have "add1" Error 38 line 897 :width or/and type mismatch
earlier we have "add1" Error 38 line 898 :width or/and type mismatch
earlier we have "add1" Error 38 line 899 :width or/and type mismatch
earlier we have "add1" Error 38 line 900 :width or/and type mismatch

MBK_SEPAR='_'; export MBK_SEPAR; /home/lkcl/alliance/install/bin/lvx          vst al test_part_add_cts_r test_part_add_cts_r_ext -f

except that this does not make sense because examining those files
there's nothing at lines 897-900 that matches up with "add1".

this is very weeeird...
Comment 37 Jean-Paul Chaput 2020-02-20 23:51:35 GMT
I commited in alliance-check-toolkit an example for test_part_add.

1. To correct the lvx error, add in the Makefile:
     VST_FLAGS = --vst-use-concat
   My fault here, I did encounter this error before. There is a tricky
   problem in the vst format. The affectation sometimes needs to be in
   the form (in PORT MAP):
      terminal => the_net(2 downto 0)
   And sometimes
      terminal => the_net(2) & the_net(1) & the_net(0)
   The instance messages were a by-product, as "add1" was not getting
   loaded in the netlist.

2. To avoid the "has an abutment box but is not placed" message, add
   the sub-modules to the NETLISTS variable:
                 NETLISTS = test_part_add \
                            ripple        \
                            add1
    The only requirement is that the "top" module is the first item in
    the list.
      This way, when you do a "make clean", no intermediate ap/vst
    files remain.

3. I did modify blif2vst.py, so it should automatically rename '$$'
   into '_unm' (for Uniquified NMigen). Not tested as I was not able
   to run the test_part_add.py module due to the lack of the ieee754
   module.
Comment 38 Jean-Paul Chaput 2020-02-20 23:55:09 GMT
Forgot: yosys.py expects the top module to be named "top" in the RTLIL file.
That was not the case in the example file you put in the attachment. I did
change it in alliance-check-toolkit.
Comment 39 Luke Kenneth Casson Leighton 2020-02-21 00:05:59 GMT
thanks jean-paul i will try this tomorrow.  nmigen rtlil function has a name parameter, therefore "top" does not have to be hardcoded.

look in soclayout/mk/synthesise-yosys.mk and also examples/*.py
Comment 40 Luke Kenneth Casson Leighton 2020-02-21 00:08:23 GMT
(In reply to Jean-Paul.Chaput from comment #37)

> 3. I did modify blif2vst.py, so it should automatically rename '$$'
>    into '_unm' (for Uniquified NMigen). Not tested as I was not able
>    to run the test_part_add.py module due to the lack of the ieee754
>    module.

should be ok python3 examples/alu_hier.py
no external dependencies there
then make -f Makefile2 lvx
i think.
Comment 41 Tobias Platen 2020-02-21 07:40:25 GMT
I tried to run make in the soclayout repo, but one of the included files is missing.

Makefile:15: mk/design-flow.mk: No such file or directory
make: *** No rule to make target 'mk/design-flow.mk'.  Stop.
Comment 42 Jean-Paul Chaput 2020-02-21 10:31:40 GMT
Hello Tobias,

As I'm not the one managing that repository I cannot tell why, but it also
seems to me that it is incomplete.

Nevertheless, I did put in alliance-check-toolkit/benchs/nmigen/alu_hier
an example taken from there that should work (you will have to pull the
latest commit).

Best regards,
Comment 43 Luke Kenneth Casson Leighton 2020-02-21 10:39:42 GMT
(In reply to Jean-Paul.Chaput from comment #42)
> Hello Tobias,
> 
> As I'm not the one managing that repository I cannot tell why, but it also
> seems to me that it is incomplete.

run mksyms.sh in the cwd. i don't like adding duplicate copies of files.  the only reason mk itself is not a symlink is because you were developing an alternative synthesis-yosys.mk

> Nevertheless, I did put in alliance-check-toolkit/benchs/nmigen/alu_hier
> an example taken from there that should work (you will have to pull the
> latest commit).

yes that one works, it is test_part_sig.py that goes haywire.
Comment 44 Jean-Paul Chaput 2020-02-21 11:04:13 GMT
OK, I got it configured.

Now, I'm still stuck with ieee754 missing nMigen module.
Comment 45 Luke Kenneth Casson Leighton 2020-02-21 11:54:23 GMT
(In reply to Jean-Paul.Chaput from comment #44)
> OK, I got it configured.
> 
> Now, I'm still stuck with ieee754 missing nMigen module.

https://libre-riscv.org/HDL_workflow/

nmigen install instructions there.  git clone then python setup.py develop (which is an inplace version of install).  i prefer develop not install because you can just git pull new code into the source dir and not have to rerun install.

oh wait you also need nmutils that is listed in HDLworkflow as well
Comment 46 Jean-Paul Chaput 2020-02-21 12:24:08 GMT
It truly was the repository of ieee754 that I needed to clone...
Now, I've a problem with the generated RTLIL:

  Yosys 0.9 (git sha1 UNKNOWN, clang 3.4.2 -fPIC -Os)

  1. Executing ILANG frontend.
  Input filename: part_sig_add.il
  ERROR: Parser error in line 1: syntax error
  make: *** [part_sig_add.blif] Error 1

With the head of the il file being:

  [(sig mask), (sig mask), (sig mask), (sig mask)]
  partial 12 16 [4, 8] 5
  partial 8 16 [8, 12] 5

I have less-than-one-week-old versions of nMigen and Yosys.
I build rpm packages of them for Scientific Linux, which you can see
here:
   http://ftp.lip6.fr/pub/linux/distributions/slsoc/soc/7/addons/x86_64/repoview/
Comment 47 Luke Kenneth Casson Leighton 2020-02-21 12:40:24 GMT
(In reply to Jean-Paul.Chaput from comment #46)
> It truly was the repository of ieee754 that I needed to clone...
> Now, I've a problem with the generated RTLIL:
> 
>   Yosys 0.9 (git sha1 UNKNOWN, clang 3.4.2 -fPIC -Os)
> 
>   1. Executing ILANG frontend.
>   Input filename: part_sig_add.il
>   ERROR: Parser error in line 1: syntax error
>   make: *** [part_sig_add.blif] Error 1
> 
> With the head of the il file being:
> 
>   [(sig mask), (sig mask), (sig mask), (sig mask)]
>   partial 12 16 [4, 8] 5
>   partial 8 16 [8, 12] 5

oink??

oh wait - i recognise that: that's debug output from stdout.  you're
running "python3 examples/part_sig_add.py > part_sig_add.il" aren't you?

it should be just "python3 examples/part_sig_add.py".

if you have a look at the source you'll see it creates part_sig_add.il
as part of the "create_ilang()" function.

btw so that you get some "sync" statements, i've added a new class
TestAddMod2 and you'll need to do a "git pull" on both soclayout
as well as ieee754fpu.  then:

soclayout$ make -f Makefile2 lvx
Comment 48 Luke Kenneth Casson Leighton 2020-02-21 12:41:25 GMT
(In reply to Luke Kenneth Casson Leighton from comment #47)

> btw so that you get some "sync" statements, i've added a new class
> TestAddMod2 and you'll need to do a "git pull" on both soclayout
> as well as ieee754fpu.  then:
> 
> soclayout$ make -f Makefile2 lvx

sorry:

ieee754fpu$ git pull
soclayout$ git pull
soclayout$ python3 examples/test_part_add.py
soclayout$ make -f Makefile2 lvx
Comment 49 Luke Kenneth Casson Leighton 2020-02-21 13:06:13 GMT
ok another git pull, jean-paul, slowly bashing my way through various
errors:

***** Compare Connections ......................................................
.......................................................

vdd of 'subckt_249_ls_1_subckt_53_sm0_subckt_0_inv_x1' is NOT connected to i1 of
 'subckt_249_ls_1_subckt_4_na2_x1' in netlist 2
through signal subckt_249_ls_1.sm0_mask 1 but to signal vdd
Comment 50 Luke Kenneth Casson Leighton 2020-02-21 13:12:30 GMT
ah

        + reorder$20 [.model]
        + reorder$25 [.model]
        + reorder$5 [.model]
        + ripple [.model]
        + ripple$26 [.model]
        + sm0 [.model]
        + sm1 [.model]
        + sm2 [.model]
[WARNING] In &<id:6355 Instance subckt_242_add_3 add_3>
          Terminal b[0] is connected to POWER/GROUND vdd through the alias $true.
[WARNING] In &<id:6355 Instance subckt_242_add_3 add_3>
          Terminal b[1] is connected to POWER/GROUND vdd through the alias $true.
[WARNING] In &<id:6355 Instance subckt_242_add_3 add_3>
          Terminal b[2] is connected to POWER/GROUND vdd through the alias $true.
[WARNING] In &<id:6355 Instance subckt_242_add_3 add_3>
          Terminal b[3] is connected to POWER/GROUND vdd through the alias $true.

... *click*.  this is probably because we hard-code the input of an add
cell (add_3) to "all 1s", because it's being used as an inverter.

somewhere the code is going "hey i will set that input to VDD".

i notice also that zero_NN is ignored (not connected up properly)
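(a hypothetical nmigen fragment illustrating the pattern described above: one adder operand hard-wired to all-1s, which synthesis then ties to VDD/$true in the netlist:)

from nmigen import Module, Signal, Const

# illustrative only: wiring one adder operand to a constant "all 1s";
# after synthesis those constant bits become VDD ($true) ties, which is
# what the warnings above are reporting.
m = Module()
a = Signal(4)
b = Signal(4)
o = Signal(4)
m.d.comb += [
    b.eq(Const(0b1111, 4)),
    o.eq(a + b),
]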
Comment 51 Jean-Paul Chaput 2020-02-21 14:26:47 GMT
I now got test_part_add to work (Makefile3).
I almost got part_sig_add to work (Makefile2), but lvx fails due to the
fact that some external terminals of the netlist are, in fact, unconnected.
That is, in the netlist, you have "carry_in(4)" which is not connected to
any cell. So the router will not generate any physical wire for it, then
the extractor (cougar) will not extract anything for that net, so the
extracted netlist does not have "carry_in(4)" in its interface. Hence the
lvx failure. I know it may not be very practical for designers, but
would it be possible to remove unconnected nets from the interface?
Otherwise we have to edit the vst to remove them (possibly through all
the hierarchy).

Concerning the $$ problem, I don't see it. The generated blif file does
not contain any.

I've also modified the synthesis-yosys.mk so you shouldn't need to patch
it. So you can just make a link of the "mk" directory (ah, maybe not,
because of the user's profile).

My advice concerning that repository is that you split it, one directory
per test design. I kept running the wrong MakefileX...
Comment 52 Tobias Platen 2020-02-21 14:34:21 GMT
Some dependencies are missing in the Debian archive:
apt-get install libmotif-dev
libmotif-dev : Depends: libxft-dev but it is not going to be installed

But I could install alliance from the debian repo:
apt-get install alliance

Get:1 http://deb.debian.org/debian stretch/main ppc64el libmotif-common all 2.3.4-13 [28.4 kB]
Get:2 http://deb.debian.org/debian stretch/main ppc64el libxm4 ppc64el 2.3.4-13 [897 kB]
Get:3 http://deb.debian.org/debian stretch/main ppc64el alliance ppc64el 5.1.1-1.1+b1 [5006 kB]

Now I need to set the path in mk/users.d
Comment 53 Luke Kenneth Casson Leighton 2020-02-21 15:05:16 GMT
(In reply to Jean-Paul.Chaput from comment #51)
> I now got test_part_add to work (Makefile3).

confirmed here, w00t! :)

> I almost got part_sig_add to work (Makefile2), but lvx fails due to the
> fact that some external terminals of the netlist are, in fact, unconnected.
> That is, in the netlist, you have "carry_in(4)" which is not connected to
> any cell.

okaaay, that will be missing somewhere, one of the adders... ah i think
i know which one, it's the one to do with neg_output.

yes, that one, the carry_in is supposed to be wired to "zeros".
more later, my family has arranged a party :)
Comment 54 Jean-Paul Chaput 2020-02-21 15:27:21 GMT
I've committed a last modification in alliance-check-toolkit to check if the
".il" is directly generated by nmigen or if it is on stdout.

In test_part_add, two ".il" are generated, which one is the good one?
(anyway, both of them are lvx ok).
Comment 55 Luke Kenneth Casson Leighton 2020-02-21 16:17:05 GMT
(In reply to Jean-Paul.Chaput from comment #54)
> I've committed a last modification in alliance-check-toolkit to check if the
> ".il" is directly generated by nmigen or if it is on stdout.

ahh... ok, so the python3 xxx generate -t il only works if the program
is using nmigen.cli.main.  it's typically only used for "examples".

nmigen.cli.main does *not* have the option to make anything "special", including
setting the top module to anything other than "top".

however i think it's a good idea what you did.  it will work in both
cases.

> In test_part_add, two ".il" are generated, which one is the good one?

one is used in one Makefile, the other in another.  it was a quick hack.

> (anyway, both of them are lvx ok).

excellent.

that just leaves part_sig_add.py, which has constants (1, 0) connected
in a large number of places.

these are where the warnings in Comment #50 are coming in: they're being
connected to VDD and VSS (correctly), however you can see the netlist
comparator is later getting very confused.
Comment 56 Luke Kenneth Casson Leighton 2020-02-21 16:19:07 GMT
(In reply to Tobias Platen from comment #52)
> Some dependencies are missing in the Debian archive:

the message below says they're not missing, there are conflicts.

> apt-get install libmotif-dev
> libmotif-dev : Depends: libxft-dev but it is not going to be installed

if you've followed the trick of setting up testing as well, you can
try "apt-get -t testing install libmotif-dev"

however to track it down you may need to try "apt-get install libxft-dev"
and if that doesn't work keep tracking down, tracking down, until you
get to one that gives a different message.

> 
> But I could install alliance from the debian repo:
> apt-get install alliance
> 
> Get:1 http://deb.debian.org/debian stretch/main ppc64el libmotif-common all
> 2.3.4-13 [28.4 kB]
> Get:2 http://deb.debian.org/debian stretch/main ppc64el libxm4 ppc64el
> 2.3.4-13 [897 kB]
> Get:3 http://deb.debian.org/debian stretch/main ppc64el alliance ppc64el
> 5.1.1-1.1+b1 [5006 kB]
> 
> Now I need to set the path in mk/users.d
Comment 57 Jean-Paul Chaput 2020-02-21 17:49:32 GMT
The warnings about signals connected to POWER/GROUND are just warnings.
The Blif loader connects them to the special cells "zero_x0" or "one_x0".

I don't know what the further problem with lvx is, as I cannot reproduce
it on my end.
Comment 58 Luke Kenneth Casson Leighton 2020-02-21 19:26:17 GMT
(In reply to Jean-Paul.Chaput from comment #57)
> The warnings about signals connected to POWER/GROUND are just warnings.
> The Blif loader connects them to the special cells "zero_x0" or "one_x0".
> 
> I don't know what the further problem with lvx is, as I cannot reproduce
> it on my end.

this is make -f Makefile2 lvx... ah sorry, that's part_sig_add.py that
needs to be turned into .il:

soclayout$ python3 examples/part_sig_add.py
soclayout$ make -f Makefile2 lvx
Comment 59 Luke Kenneth Casson Leighton 2020-02-21 19:29:02 GMT
Created attachment 24 [details]
script output

jean-paul i'm attaching the nohup.out from "make -f Makefile2 lvx"
so you can see it in full.

perhaps... could we exchange by email the full output (all
intermediate files) and do some "diffs" to see if there's anything
obviously different between the two setups?
Comment 60 Luke Kenneth Casson Leighton 2020-02-21 21:18:45 GMT
jean-paul i have an idea, i will make a smaller test with minimal code in it.  also separate out the Makefiles.

i think it's really important we get a handle on how different parts of the soc work, and isolate issues into small tests.

apologies, this lack of experience is clearly testing some assumptions of the code :)
Comment 61 Jean-Paul Chaput 2020-02-22 10:52:08 GMT
Hello Luke,

Could you produce a log file with this change in your
coriolis2/settings.py :

Cfg.getParamBool( 'misc.logMode' ).setBool( True )

This will suppress the "counting" effect in the router's output and
make it easier to compare.

In theory, Coriolis2 is deterministic, but it assumes that we have
exactly the same executing context. For example, that we have the
same Yosys. Maybe we can work from the generated blif file.
I would also need your users-lkcl.mk.

I did go through your log, but did not see obvious problems.
I have a suspicion but I would need to confirm it with the log.

My recommendation would be to have one directory per block.

For the block descriptions, we have a constraint, originally
derived from the fact that we were using VHDL as our main language:
* One signal <-> one external connector (if any)

This allows us to implement efficient hierarchical net walkthrough.

The second step in experiment would be to build a custom regular
block. That is procedural netlist building and matrix like placement,
then automatic routing.

Making an ASIC is still an art...
Comment 62 Luke Kenneth Casson Leighton 2020-02-22 11:33:04 GMT
(In reply to Jean-Paul.Chaput from comment #61)
> Hello Luke,
> 
> Could you produce a log file with this change in your
> coriolis2/settings.py :

http://ftp.libre-riscv.org/soclayout.tgz

> Cfg.getParamBool( 'misc.logMode' ).setBool( True )
> 
> This will suppress the "counting" effect in the router's output and
> make it easier to compare.
> 
> In theory, Coriolis2 is deterministic, but it assumes that we have
> exactly the same executing context. For example, that we have the
> same Yosys. Maybe we can work from the generated blif file.

ok that's in the above .tgz file

> I would also need your users-lkcl.mk.

# Where lkcl gets his tools installeds.

 #export CORIOLIS_TOP  = $(HOME)/coriolis-2.x/$(BUILD_VARIANT)$(LIB_SUFFIX_)/$(BUILD_TYPE_DIR)/install
 #export ALLIANCE_TOP  = $(HOME)/alliance/$(BUILD_VARIANT)$(LIB_SUFFIX_)/install
 export CHECK_TOOLKIT = $(HOME)/alliance-check-toolkit
 export YOSYS_TOP     = /usr


> I did go through your log, but did not see obvious problems.
> I have a suspicion but I would need to confirm it with the log.
> 
> My recommendation would be to have one directory per block.

ok. that sounds like a good idea to me anyway.  mind racing ahead
somewhat, we probably should be creating a way to auto-generate
the entire structure based on information in the actual source
code.

are there examples to start from?

> For the block descriptions, we have a constraint, originally
> derived from the fact that we were using VHDL as our main language:
> * One signal <-> one external connector (if any)
> 
> This allows us to implement efficient hierarchical net walkthrough.

ok.  would like to see that in action.

> The second step in experiment would be to build a custom regular
> block. That is procedural netlist building and matrix like placement,
> then automatic routing.

ooo :)
 
> Making an ASIC is still an art...

lots of small things get in the way, any one of which stops any kind of
incremental progress.  this is why we start small.
Comment 63 Jean-Paul Chaput 2020-02-22 11:49:40 GMT
I will look into it this afternoon.

> ok. that sounds like a good idea to me anyway.  mind racing ahead
> somewhat, we probably should be creating a way to auto-generate
> the entire structure based on information in the actual source
> code.

I'm on the same line of thought here. Reproducibility through full
automation is a key point in this line of work. My brain was a
little slow to kick off, but I finally reminded myself that, based on
our experience at building ASICs here, it is fundamental that
we choose *one* target platform, specify the *exact* versions
(git hashes) of each tool used to build the design. It can be a chrooted
environment, a docker container, a full VM image or whatever so
all people works exactly the same. We may provide the image and the
fully automated procedure to rebuild it from scratch.

Considering a chrooted Debian 9, we should use the *same* user inside
it. Ideally the same name/UID, but keeping a common UID across various
systems may be difficult, so at least the same name. And publish the
whole constructed chrooted filesystem (bit tarball).

Running nMigen outside the chroot is a quick hack, but the kind that
must be avoided. I've absolutely nothing against hack, as long as
they can be automated.
Comment 64 Luke Kenneth Casson Leighton 2020-02-22 12:08:10 GMT
(In reply to Jean-Paul.Chaput from comment #63)
> I will look into it this afternoon.
> 
> > ok. that sounds like a good idea to me anyway.  mind racing ahead
> > somewhat, we probably should be creating a way to auto-generate
> > the entire structure based on information in the actual source
> > code.
> 
> I'm on the same line of thought here. Reproducibility through full
> automation is a key point in this line of work. My brain was a
> little slow to kick off, but I finally reminded myself that, based on
> our experience at building ASICs here, it is fundamental that
> we choose *one* target platform, specify the *exact* versions
> (git hashes) of each tool used to build the design.

eurk.  yeah.

> It can be a chrooted
> environment, a docker container, a full VM image or whatever so
> all people works exactly the same. We may provide the image and the
> fully automated procedure to rebuild it from scratch.

hmmm... ok.
 
> Considering a chrooted Debian 9, we should use the *same* user inside
> it. 

that makes sense.  not keen on it, as it becomes inconvenient to
schroot into.  mind you if that's done with a script (outside) it's
ok.

> Ideally the same name/UID, but keeping a common UID across various
> systems may be difficult, so at least the same name. 

that's a good idea.

> And publish the
> whole constructed chrooted filesystem (bit tarball).

bleuch.

> Running nMigen outside the chroot is a quick hack, but the kind that
> must be avoided. I've absolutely nothing against hack, as long as
> they can be automated.

i've installed nmigen inside.  it was a bit of a nuisance but doable.

debian 9 honestly is too old (no python 3.6), and stretch-backports
doesn't include it (the normal and stable way you'd include later
software)
Comment 65 Luke Kenneth Casson Leighton 2020-02-22 13:23:36 GMT
btw remember, jeanpaul, we aim to break the layout into blocks (cells) hierarchically, anyway, so that if necessary we can reuse some (particularly the large FPU ALU blocks) and also do a little more control over routing, as well as add in GND VIA rings around blocks.

so, learning how to do blocks early would be good.
Comment 66 Jean-Paul Chaput 2020-02-22 14:26:55 GMT
> > I'm on the same line of thought here. Reproducibility through full
> > automation is a key point in this line of work. My brain was a
> > little slow to kick off, but I finally reminded myself that, based on
> > our experience at building ASICs here, it is fundamental that
> > we choose *one* target platform, specify the *exact* versions
> > (git hashes) of each tool used to build the design.
> 
> eurk.  yeah.

  I know it looks a bit like dictatorship. And the end goal is to run
  on any sufficiently new system. But this way should allow us to build
  and debug more quickly by suppressing any "external" cause of
  problems outside the tool we are using and the design itself.
    People can then port from that master reference to other
  systems.
    The proposed method is to focus on one system to completely build
  the design quickly, and only then, expand to other systems.

> > It can be a chrooted
> > environment, a docker container, a full VM image or whatever so
> > all people works exactly the same. We may provide the image and the
> > fully automated procedure to rebuild it from scratch.
> 
> hmmm... ok.
>  
> > Considering a chrooted Debian 9, we should use the *same* user inside
> > it. 
> 
> that makes sense.  not keen on it, as it becomes inconvenient to
> schroot into.  mind you if that's done with a script (outside) it's
> ok.
> 
> > Ideally the same name/UID, but keeping a common UID across various
> > systems may be difficult, so at least the same name. 
> 
> that's a good idea.
> 
> > And publish the
> > whole constructed chrooted filesystem (bit tarball).
> 
> bleuch.

  When we did ASICs, we did take a whole snapshot of all the tools
  along with the design, to be sure we could rebuild it later. Almost
  mothballed the computer also...

> > Running nMigen outside the chroot is a quick hack, but the kind that
> > must be avoided. I've absolutely nothing against hack, as long as
> > they can be automated.
> 
> i've installed nmigen inside.  it was a bit of a nuisance but doable.
> 
> debian 9 honestly is too old (no python 3.6), and stretch-backports
> doesn't include it (the normal and stable way you'd include later
> software)

So why not use Debian 10 ?
Comment 67 Jean-Paul Chaput 2020-02-22 14:32:21 GMT
(In reply to Luke Kenneth Casson Leighton from comment #65)
> btw remember, jeanpaul, we aim to break the layout into blocks (cells)
> hierarchically, anyway, so that if necessary we can reuse some (particularly
> the large FPU ALU blocks) and also do a little more control over routing, as
> well as add in GND VIA rings around blocks.
> 
> so, learning how to do blocks early would be good.

No problem. You should have a look to the RingOscillator bench in
alliance-check-toolkit. Fully manually placed & routed block.
Shows all the technique you can use.

For FPU block, the P&R should be used unless you have a clear idea
of the placement. But from what I know, it is not trivial.
Then with a script, wrap up the whole in a ring of VIAs.
Comment 68 Luke Kenneth Casson Leighton 2020-02-22 15:22:52 GMT
(In reply to Jean-Paul.Chaput from comment #67)
> (In reply to Luke Kenneth Casson Leighton from comment #65)
> > btw remember, jeanpaul, we aim to break the layout into blocks (cells)
> > hierarchically, anyway, so that if necessary we can reuse some (particularly
> > the large FPU ALU blocks) and also do a little more control over routing, as
> > well as add in GND VIA rings around blocks.
> > 
> > so, learning how to do blocks early would be good.
> 
> No problem. You should have a look to the RingOscillator bench in
> alliance-check-toolkit. Fully manually placed & routed block.
> Shows all the technique you can use.

okaaay. is there a way to visualise that?  i notice "make view" is missing?

> For FPU block, the P&R should be used unless you have a clear idea
> of the placement. But from what I know, it is not trivial.

the blocks are just a long chain.  combinatorial block, registers,
combinatorial block, registers, repeat, repeat.

the direction from each block is forward-only.  in other words, inputs
come in exclusively on left, outputs go out exclusively on right.

so what i don't want to see happening is a massive mess of several
hundreds of thousand gates.

instead what i would like to see is a defined "width" parameter, set for
all "blocks", then the connections between each are defined such that
the inputs from one block come "straight" from the outputs of the previous.

this i think is very similar to the RingOscillator, which is in effect
a type of pipeline.

> Then with a script, wrap up the whole in a ring of VIAs.

ok.
Comment 69 Luke Kenneth Casson Leighton 2020-02-22 15:23:39 GMT
(In reply to Jean-Paul.Chaput from comment #66)

> > debian 9 honestly is too old (no python 3.6), and stretch-backports
> > doesn't included it (the normal and stable way you'd include later
> > software)
> 
> So why not use Debian 10 ?

i didn't know it might work! :)

will set that up soon.
Comment 70 Jean-Paul Chaput 2020-02-22 15:43:07 GMT
> > No problem. You should have a look to the RingOscillator bench in
> > alliance-check-toolkit. Fully manually placed & routed block.
> > Shows all the technique you can use.
> 
> okaaay. is there a way to visualise that?  i notice "make view" is missing?

  make cgt

  Then load the "ringoscillator" design.

> > For FPU block, the P&R should be used unless you have a clear idea
> > of the placement. But from what I know, it is not trivial.
> 
> the blocks are just a long chain.  combinatorial block, registers,
> combinatorial block, registers, repeat, repeat.
> 
> the direction from each block is forward-only.  in other words, inputs
> come in exclusively on left, outputs go out exclusively on right.
> 
> so what i don't want to see happening is a massive mess of several
> hundreds of thousand gates.
> 
> instead what i would like to see is a defined "width" parameter, set for
> all "blocks", then the connections between each are defined such that
> the inputs from one block come "straight" from the outputs of the previous.

  OK, the remaining question is: do you have a placement in mind for
  each combinatorial block, or do we use the placer?
Comment 71 Luke Kenneth Casson Leighton 2020-02-22 16:41:40 GMT
(In reply to Jean-Paul.Chaput from comment #70)

> > okaaay. is there a way to visualise that?  i notice "make view" is missing?
> 
>   make cgt

ah yes i remember.
 
>   Then load the "ringoscillator" design.

got it!  thank you.

>   OK, remaining question may be do you have a placement in mind for
>   each combinatorial block or do we use the placer?

not really, i would imagine that the placer is intelligent enough to
realise that cells nearest to inputs should be placed near the inputs,
and cells nearest to outputs likewise near the outputs, and leave it at that.

even within the combinatorial blocks the paths are pretty straightforward
and go pretty much in a directed graph.

if however on visual inspection it is clear that that is going wrong,
we can adjust accordingly.
Comment 72 Luke Kenneth Casson Leighton 2020-02-22 16:46:43 GMT
i moved the failing example (part_sig_add.py) to experiments2.

i will cut as much out of it as possible.
Comment 73 Luke Kenneth Casson Leighton 2020-02-23 00:12:41 GMT
okaaay, i have a sneaking suspicion i know what might be going on.
two things:

nmigen allows signals to be declared "zero width".  these
don't do anything, they don't get "actioned", and so on.
however they end up in the yosys output.

secondly: i fixed one of these, replacing it with a module
that assigns a constant to a Signal.  here's the resulting yosys (ilang) output:


module \sm3
  attribute \src "/home/lkcl/ieee754fpu/src/ieee754/part_shift/part_shift_dynamic.py:25"
  wire width 7 output 0 \mask
  wire width 1 $verilog_initial_trigger
  process $group_0
    assign \mask 7'0000000
    assign \mask 7'0000011
    assign $verilog_initial_trigger $verilog_initial_trigger
    sync init
      update $verilog_initial_trigger 1'0
  end
end


and here's the resultant vst:

entity sm3 is
  port ( vdd : linkage bit
       ; vss : linkage bit
       );
end sm3;

architecture structural of sm3 is



begin

end structural;


note how it's empty?  it also results in sm3.ap not being created.
further on, we end up with "EtesianEngine::toColoquinte() cannot manage
unplaced block, cell height is -2"

(because it's empty)


i also ran into this problem a number of times when i tried to cut out
code (for debugging purposes) by assigning a Const to an output Signal,
expecting the VHDL in that Cell to simply hard-set the output.

however it seems that the result is that no VHDL is generated at *all*.
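
For reference, a minimal nmigen sketch of the pattern being described - a module whose only content is a constant assigned to an output Signal - which is what comes out of yosys/blif2vst as an empty "structural" entity. Module, signal and file names here are hypothetical, not taken from the actual design:

from nmigen import Elaboratable, Module, Signal, Const
from nmigen.back import rtlil

class ConstMask(Elaboratable):
    """A block whose only 'logic' is a constant driven onto an output."""
    def __init__(self):
        self.mask = Signal(7)   # output, always 0b0000011

    def elaborate(self, platform):
        m = Module()
        # after synthesis there are no gates left in this module,
        # hence the empty vst entity and the missing .ap file
        m.d.comb += self.mask.eq(Const(0b0000011, 7))
        return m

if __name__ == "__main__":
    dut = ConstMask()
    with open("constmask.il", "w") as f:
        f.write(rtlil.convert(dut, ports=[dut.mask]))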
Comment 74 Luke Kenneth Casson Leighton 2020-02-23 16:10:28 GMT
jean paul i have a better handle on this now.

i have some ideas.

the problems we have seem to revolve around constants and around modules, both constants *in* modules (particularly combinatorial blocks) and passing *in* constants.

yosys, being written in c++, is not a good place to fix those.

therefore i think a good thing to do is an nmigen AST rewriter, very similar to python's lib2to3, which looks for patterns and augments the AST.

for example if a module is passed a Const parameter, the function parameter is REPLACED with the Const, even before yosys sees the ILANG file.

i will make sure a budget is available to do that.
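
To illustrate the intended effect of such a rewriter (this is not an implementation of it, just a hedged sketch with made-up names): if the parameter is resolved at the Python level, in the constructor, the constant is already baked into the ILANG before yosys ever sees it.

from nmigen import Elaboratable, Module, Signal

class MaskGen(Elaboratable):
    # 'width' is a plain Python int rather than a Const/Signal parameter,
    # so the generated ilang already contains the resolved value
    def __init__(self, width):
        self.width = width
        self.mask  = Signal(width)

    def elaborate(self, platform):
        m = Module()
        # evaluated here, in Python: yosys never has to propagate this
        # constant across a module boundary
        m.d.comb += self.mask.eq((1 << self.width) - 1)
        return m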
Comment 75 Staf Verhaegen 2020-02-23 17:13:54 GMT
I think the problem is that nmigen assumes you will flatten and optimize the design in yosys. This optimization after flattening should normally take care of constant propagation.
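
For reference, a minimal sketch of that flatten-then-optimize step, driving yosys from Python (the file and top-level names are illustrative only); the point is that "flatten" followed by "opt" propagates constants across the former module boundaries before any techmapping is done:

import subprocess

# illustrative file/top names; the relevant part is the pass order:
# flatten first, then opt, so constants cross former module boundaries
subprocess.run([
    "yosys", "-p",
    "read_ilang part_sig_add.il; "
    "hierarchy -check -top ls_1; "       # assumed top-level name
    "flatten; opt -full; "
    "write_ilang part_sig_add_flat.il",
], check=True)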
Comment 76 Luke Kenneth Casson Leighton 2020-02-23 17:55:11 GMT
(In reply to Staf Verhaegen from comment #75)
> I think the problem is that nmigen assumes you will flatten and optimize the
> design in yosys. This optimization after flattening should normally take
> care of constant propagation.

ah yes that makes sense, it explains a lot, particularly that on flatten it worked great.

unfortunately we will be looking at around maybe 30 mm^2 (last time we calculated it, if you remember?), and at approximately 25,000 gates per mm^2 in 180nm we are at around 500,000 gates.

we simply cannot do a design that large with full flattening.

hence the multi stage approach.
Comment 77 Jean-Paul Chaput 2020-02-23 18:15:54 GMT
I'm still investigating the problem of the different results between your
version and mine. I have made progress, and indeed the problem is clearly
around constants generation.

I can reproduce your result if I take the blif file generated by your
version of Yosys. I did locate where the problem comes from by analysing
the lvx message (which, I agree, is very obfuscated). By looking at
the generated vst file I do see abnormal connections, like a normal
I/O directly connected to vdd (this should *never* happen with
Alliance/Coriolis). You can see that at line 803 of ls_1.vst (instance
subckt_53_sm0).

This results from what is written in part_sig_add.blif starting at line
2832. If I understand the blif format correctly, this is the truth table
for gates[0], considered as a *logic signal* inside the "ls_1" model
(telling that, in fact, gates[0] <= pmask[0]). But this is wrong,
because gates[0] is *not* a *signal* but the *connector* of the
"sm0" subckt (aka, instance). So I humbly suspect that Yosys did
write something wrong. Then blif2vst tries to make sense of it
and generates strange things; I should have better error detection
there and stop with an error.

I'm now in the process of checking with various versions of Yosys
to see if it's a bug that has been corrected or one that just appeared.

It is a good illustration of why everyone should have exactly the same
versions of the tools, so we get reproducible results and errors and
can converge faster.
Comment 78 Luke Kenneth Casson Leighton 2020-02-23 18:24:00 GMT
yuk! :)

i am using yosys as of yesterday, now.  can you do yosys -V and let me know?

i can then do binary search compiling different versions, looking for the commit that fails.  i have ccache installed and a stupidly high resource laptop.  64GB DDR4 RAM, 8 core i9, 2TB NVMe 2500mbytes/sec muhahaha

if you let me know the yosys version you use it sets a lower bound on the git version to start from.
Comment 79 Luke Kenneth Casson Leighton 2020-02-23 18:27:01 GMT
or, if you enjoy the task of binary search recompiling yosys feel free jp :)
Comment 80 Jean-Paul Chaput 2020-02-23 18:43:49 GMT
(In reply to Luke Kenneth Casson Leighton from comment #78)
> yuk! :)
> 
> i am using yosys as of yesterday, now.  can you do yosys -V and let me know?
> 
> i can then do binary search compiling different versions, looking for the
> commit that fails.  i have ccache installed and a stupidly high resource
> laptop.  64GB DDR4 RAM, 8 core i9, 2TB NVMe 2500mbytes/sec muhahaha
> 
> if you let me know the yosys version you use it sets a lower bound on the
> git version to start from.

I did use this one, less than a week ago.

https://github.com/cliffordwolf/yosys/archive/yosys-0.9.tar.gz

I'm also starting to perform some rebuild on my laptop,
even if it is much more modest!
Comment 81 Jean-Paul Chaput 2020-02-23 18:44:43 GMT
(In reply to Luke Kenneth Casson Leighton from comment #79)
> or, if you enjoy the task of binary search recompiling yosys feel free jp :)

No, please do !
I will just make additional tests.
Comment 82 Staf Verhaegen 2020-02-23 18:56:00 GMT
> unfortunately as we will be looking at around maybe 30 mm^2 last time we
> calculated it, if you remember? if we have appx 25000 gates per mm^2 in
> 180nm we are at around 500,000 gates.
> 
> we simply cannot do a design that large with full flattening.
> 
> hence the multi stage approach.

Why not ? It's common approach for P&R. Contrary to functions in software source code you don't gain anything by not flattening modules. You only block possible optimizations.

And if you really want to keep it hierarchical then I think the right approach is to add a special step inside that selectively duplicates modules with constant parameters. This will indeed involve C++ programming but why do want to exclude that option for this reason ?
Comment 83 Jacob Lifshay 2020-02-23 19:07:57 GMT
(In reply to Staf Verhaegen from comment #82)
> > unfortunately as we will be looking at around maybe 30 mm^2 last time we
> > calculated it, if you remember? if we have appx 25000 gates per mm^2 in
> > 180nm we are at around 500,000 gates.
> > 
> > we simply cannot do a design that large with full flattening.
> > 
> > hence the multi stage approach.
> 
> Why not ? It's common approach for P&R. Contrary to functions in software
> source code you don't gain anything by not flattening modules. You only
> block possible optimizations.

+1

> 
> And if you really want to keep it hierarchical then I think the right
> approach is to add a special step inside that selectively duplicates modules
> with constant parameters. This will indeed involve C++ programming but why
> do want to exclude that option for this reason ?

Similar to interprocedural constant propagation and function replication in LLVM.
Comment 84 Jean-Paul Chaput 2020-02-23 19:24:13 GMT
(In reply to Jacob Lifshay from comment #83)
> (In reply to Staf Verhaegen from comment #82)
> > > unfortunately as we will be looking at around maybe 30 mm^2 last time we
> > > calculated it, if you remember? if we have appx 25000 gates per mm^2 in
> > > 180nm we are at around 500,000 gates.
> > > 
> > > we simply cannot do a design that large with full flattening.
> > > 
> > > hence the multi stage approach.
> > 
> > Why not ? It's common approach for P&R. Contrary to functions in software
> > source code you don't gain anything by not flattening modules. You only
> > block possible optimizations.

  Set asides that I don't like the full flatten approach, the placer
  and P&R of Coriolis have never been tested with so big designs
  (because there wasn't any until very recenlty). So, to play it
  safe, it is best if there is at least a "plan B" with a design
  broken down in sub-units.
Comment 85 Staf Verhaegen 2020-02-24 09:29:19 GMT
Reason why the P&R flows have evolved from doing P&R on subbblocks to P&R on full design is because of the delay caused by the interconnect parasitics.
When scaling, the importance of the interconnect capacitive load becomes the main cause for delays in a chip and is given anymore by the input gate capacitance of the cells connected to the path. Therefor timing driven placement is needed where during placement the capacitive load on the time critical paths is taken into account. The proprietary tools even have options to optimize the synthesized logic during placement, e.g. the tool can tranform the logic on the critical paths in equivalent logic with better delay behavior for the current design and it's placement.

In a big chip the biggest delay is likely seen on the high fan-out nets connecting different blocks, e.g. the buses. When doing P&R on the subblocks and connect the blocks later on a higher level you block the timing driven optimization of these paths by the placer and you need to do it yourself during floorplanning and be able to guide the placer.

For the prototype it is not strictly necessary because the effect is that the chip will only be able to be run at a lower frequency than what one would predict; my estimate is around 70% of the clock frequency of what is predicted when not taking the interconnect parasitics into account. But I do hope the prototype is already used to see how the P&R has to evolve for future scaling.
Comment 86 Luke Kenneth Casson Leighton 2020-02-24 10:43:28 GMT
(In reply to Jean-Paul.Chaput from comment #80)
> (In reply to Luke Kenneth Casson Leighton from comment #78)
> > yuk! :)
> > 
> > i am using yosys as of yesterday, now.  can you do yosys -V and let me know?
> > 
> > i can then do binary search compiling different versions, looking for the
> > commit that fails.  i have ccache installed and a stupidly high resource
> > laptop.  64GB DDR4 RAM, 8 core i9, 2TB NVMe 2500mbytes/sec muhahaha
> > 
> > if you let me know the yosys version you use it sets a lower bound on the
> > git version to start from.
> 
> I did use this one, less than a week ago.
> 
> https://github.com/cliffordwolf/yosys/archive/yosys-0.9.tar.gz

ok that appears to be git tagged:

commit 1979e0b1f2482dbf0562f5116ab444280a377773
Author: Clifford Wolf <clifford@clifford.at>
Date:   Mon Aug 26 10:37:53 2019 +0200

    Yosys 0.9
    
    Signed-off-by: Clifford Wolf <clifford@clifford.at>


although it is easier to track down / confirm with "yosys -V".  hmmm, i don't think it will report the version correctly though if built from a .tgz

i'll *assume* the git tag yosys-0.9 was where that tarball came from.
Comment 87 Jean-Paul Chaput 2020-02-24 11:08:26 GMT
(In reply to Luke Kenneth Casson Leighton from comment #86)
> (In reply to Jean-Paul.Chaput from comment #80)
> > (In reply to Luke Kenneth Casson Leighton from comment #78)
> > > yuk! :)
> > > 
> > > i am using yosys as of yesterday, now.  can you do yosys -V and let me know?
> > > 
> > > i can then do binary search compiling different versions, looking for the
> > > commit that fails.  i have ccache installed and a stupidly high resource
> > > laptop.  64GB DDR4 RAM, 8 core i9, 2TB NVMe 2500mbytes/sec muhahaha
> > > 
> > > if you let me know the yosys version you use it sets a lower bound on the
> > > git version to start from.
> > 
> > I did use this one, less than a week ago.
> > 
> > https://github.com/cliffordwolf/yosys/archive/yosys-0.9.tar.gz
> 
> ok that appears to be git tagged:
> 
> commit 1979e0b1f2482dbf0562f5116ab444280a377773
> Author: Clifford Wolf <clifford@clifford.at>
> Date:   Mon Aug 26 10:37:53 2019 +0200
> 
>     Yosys 0.9
>     
>     Signed-off-by: Clifford Wolf <clifford@clifford.at>
> 
> 
> although it is easier to track down / confirm with "yosys -V".  hmmm, i
> don't think it will report the version correctly though if built from a .tgz
> 
> i'll *assume* the git tag yosys-0.9 was where that tarball came from.

Hello Luke,

Yes, that's the right one, and yes, when built from a tar.gz it does not
report its version correctly. I've built both versions.

BUT, I was half wrong about the bug. While I still think that the
blif file is incorrect, it was not what was causing the lvx error.
I'm correcting the Coriolis blif parser to account for the right one.
Strange thing is that now the P&R always fails, whatever the version.
I must have done something wrong in my previous tests (some cached
file perhaps). Sorry for the wrong lead.

This highlights the fact that we must have a way to confirm that the
P&R design is really the nMigen one, at least by re-simulating it.
Comment 88 Luke Kenneth Casson Leighton 2020-02-24 11:46:18 GMT
(In reply to Jean-Paul.Chaput from comment #87)
> (In reply to Luke Kenneth Casson Leighton from comment #86)
> > (In reply to Jean-Paul.Chaput from comment #80)
> > > (In reply to Luke Kenneth Casson Leighton from comment #78)
> > > > yuk! :)
> > > > 
> > > > i am using yosys as of yesterday, now.  can you do yosys -V and let me know?
> > > > 
> > > > i can then do binary search compiling different versions, looking for the
> > > > commit that fails.  i have ccache installed and a stupidly high resource
> > > > laptop.  64GB DDR4 RAM, 8 core i9, 2TB NVMe 2500mbytes/sec muhahaha
> > > > 
> > > > if you let me know the yosys version you use it sets a lower bound on the
> > > > git version to start from.
> > > 
> > > I did use this one, less than a week ago.
> > > 
> > > https://github.com/cliffordwolf/yosys/archive/yosys-0.9.tar.gz
> > 
> > ok that appears to be git tagged:
> > 
> > commit 1979e0b1f2482dbf0562f5116ab444280a377773
> > Author: Clifford Wolf <clifford@clifford.at>
> > Date:   Mon Aug 26 10:37:53 2019 +0200
> > 
> >     Yosys 0.9
> >     
> >     Signed-off-by: Clifford Wolf <clifford@clifford.at>
> > 
> > 
> > although it is easier to track down / confirm with "yosys -V".  hmmm, i
> > don't think it will report the version correctly though if built from a .tgz
> > 
> > i'll *assume* the git tag yosys-0.9 was where that tarball came from.
> 
> Hello Luke,
> 
> Yes, that's the right one and, yes when built from a tar.gz it do not
> report it's version correctly. I've build both versions.

just tried it: no repro yet, because *sigh* i've moved on, on both
ieee754fpu and the experiments/* trying to randomly "fix" things over
the past 2 days.
 
> BUT, I was half wrong about the bug. If I persist to think that the
> blif file is incorrect, this was not causing the lvx error.

ah interesting.

> I'm correcting the Coriolis blif parser to account for the righ one.

ok.  and that's from comment #77


> Strange thing is that now the P&R always fail, whatever the version.
> I must have made something wrong in my previous tests (some cached
> file perhaps). 

i have found that sometimes, even with "make clean" and rm * and git reset --hard,
about 1 in 40 times the router or something else will still go awry.

> Sorry for the wrong lead.

if you did a "git pull" on either soclayout or ieee754fpu then there
is *another* error causing build failures (here as well).
 
i must apologise that the designs, even the "simple" ones, are large
enough that creating even a small repro case is itself very tricky.
Comment 89 Luke Kenneth Casson Leighton 2020-02-24 12:00:03 GMT
(In reply to Luke Kenneth Casson Leighton from comment #88)

> > BUT, I was half wrong about the bug. If I persist to think that the
> > blif file is incorrect, this was not causing the lvx error.
> 
> ah interesting.
> 
> > I'm correcting the Coriolis blif parser to account for the righ one.
> 
> ok.  and that's from comment #77

wait... so if you use the *same* blif file, the error was fixed "once"
but thereafter it works, is that right?

because i just unpacked the tarball here:
http://ftp.libre-riscv.org/soclayout.tgz

and i can confirm that yes, the same vss errors occur, even with
yosys 0.9.
Comment 90 Luke Kenneth Casson Leighton 2020-02-24 15:38:11 GMT
ah.

here's a clue, jean-paul:

  component sm1
    port ( clk   : in bit
         ; rst   : in bit
         ; gates : in bit_vector(1 downto 0)
         ; mask_2 : out bit
         ; mask_0 : out bit
         -- Vector <mask> is holed, unvectorized.
         ; vdd   : linkage bit
         ; vss   : linkage bit
         );
  end component;


  subckt_54_sm1 : sm1
  port map ( clk   => clk
           , rst   => rst
           , gates => pmask(2 downto 1)
           , mask  => 
           , vdd   => vdd
           , vss   => vss
           );

and if you look at the blif file:

.model sm1
.inputs clk gates[0] gates[1] rst
.outputs mask[0] mask[1] mask[2]
.names $false
.names $true
1
.names $undef
.subckt inv_x1 i=rst nq=mask$next[1]
.subckt no2_x1 i0=rst i1=gates[0] nq=mask$next[2]
.subckt sff1_x4 ck=clk i=mask$next[1] q=mask[1]
.subckt sff1_x4 ck=clk i=mask$next[2] q=mask[2]
.names $undef bits[1]
1 1
.names mask[1] mask[0]
1 1
.names mask$next[1] mask$next[0]
1 1
.end

err.... that's not "unvectorised".  i checked the .il file

basically there's some definite corruption going on from BLIF to VST.
Comment 91 Jean-Paul Chaput 2020-02-24 15:54:52 GMT
(In reply to Luke Kenneth Casson Leighton from comment #90)
> ah.
> 
> here's a clue, jean-paul:
> 
>   component sm1
>     port ( clk   : in bit
>          ; rst   : in bit
>          ; gates : in bit_vector(1 downto 0)
>          ; mask_2 : out bit
>          ; mask_0 : out bit
>          -- Vector <mask> is holed, unvectorized.
>          ; vdd   : linkage bit
>          ; vss   : linkage bit
>          );
>   end component;
> 
> 
>   subckt_54_sm1 : sm1
>   port map ( clk   => clk
>            , rst   => rst
>            , gates => pmask(2 downto 1)
>            , mask  => 
>            , vdd   => vdd
>            , vss   => vss
>            );
> 
> and if you look at the blif file:
> 
> .model sm1
> .inputs clk gates[0] gates[1] rst
> .outputs mask[0] mask[1] mask[2]
> .names $false
> .names $true
> 1
> .names $undef
> .subckt inv_x1 i=rst nq=mask$next[1]
> .subckt no2_x1 i0=rst i1=gates[0] nq=mask$next[2]
> .subckt sff1_x4 ck=clk i=mask$next[1] q=mask[1]
> .subckt sff1_x4 ck=clk i=mask$next[2] q=mask[2]
> .names $undef bits[1]
> 1 1
> .names mask[1] mask[0]
> 1 1
> .names mask$next[1] mask$next[0]
> 1 1
> .end
> 
> err.... that's not "unvectorised".  i checked the .il file
> 
> basically there's some definite corruption going on from BLIF to VST.

Yes and no (a Norman answer: neither fully yes nor no).

I've found a similar problem in sm0, and that is what was causing the lvx
problem. An output was connected to "one", which is vdd, and the
intermediate cell "one" was not inserted (you cannot directly connect
vdd to a logic signal). I'm about to correct that.

The problem you found is different. It is the "one internal signal
for two external connectors" case. In the blif file, the lines:

.names mask[1] mask[0]
1 1

mean that mask[0] is an alias of mask[1] (it copies its truth table).
So Coriolis *merges* the two signals, leaving only mask[0], hence the
"hole" in the vector as mask[1] is destroyed.

Would it be possible for this to not happen in the nMigen description?
One dirty fix to keep the interface untouched would be for Coriolis
to insert a buffer between those two signals, so they become identical
but different nets (with some delay added).
Comment 92 Luke Kenneth Casson Leighton 2020-02-24 17:21:27 GMT
(In reply to Jean-Paul.Chaput from comment #91)

> > basically there's some definite corruption going on from BLIF to VST.
> 
> Yes and no (Normand response).
> 
> I've found similar problem in sm0, and that is what was causing the lvx
> problem. An output was connected to "one" which is vdd and the
> intermediate cell "one" was not inserted (you cannot directly connect
> vdd to logical signal). I'm about to correct that.
> 
> The problem you found is different. It is is the "one internal signal
> for two external connectors". In the blif file, the lines:
> 
> .names mask[1] mask[0]
> 1 1
> 
> means that mask[0] is an alias of mask[1] (it copies it's truth table).
> So Coriolis *merge* the two signals, leaving only mask[0], hence the
> "hole" in the vector as mask[1] is destroyed.

and on the current version i am looking at, at the moment:

.model sm2
.inputs gates
.outputs mask[0] mask[1] mask[2]
.names $false
.names $true
1
.names $undef
.subckt inv_x1 i=gates nq=mask[2]
.names mask[2] bit0
1 1
.names mask[2] bits
1 1
.names $true mask[0]
1 1
.names $true mask[1]
1 1
.end

and this results in this:

  subckt_54_sm1 : sm1
  port map ( gates => pmask(2 downto 1)
           , mask  => sm1_mask(2)
           , vdd   => sm1_mask(1)
           , vss   => vss
           );

so there are quite a lot of those overwrites

> Would it be possible for this to not happen in the nMigen description?

it's waaay beyond what nmigen is doing: it's what yosys is doing with the ilang file.  i did "read_blif part_sig_add.blif" then "show" and also "write_ilang part_sig_add2.il" and it's radically different.

basically, the output from yosys is so "optimised" (so much removed and
topologically changed) that i am overwhelmed even just considering the
task of trying to "stop" nmigen from producing these patterns.

honestly i suspect that, going the route of modifying nmigen, there would
be so many holes that it would be virtually impossible to cover all of them.


> One dirty fix to keep the interface untouched would be for Coriolis
> to insert a buffer between those two signals, so the become identical
> but different net (and with some delay addes).

if that gets us something "working" so that later it can be fixed /
optimised, *great*.

even if that doesn't happen, honestly i do not feel it is critical
(not unless it increases the overall chip size by more than... 60%
to put a finger-in-air number).
Comment 93 Jean-Paul Chaput 2020-02-24 17:58:28 GMT
I've just committed a bug fix in Coriolis, so the "part_sig_add" example
should work. Don't hesitate to double-check the generated vst to
confirm that the interfaces are OK now.

And we absolutely need an independent way to check nMigen vs. the extracted
netlist.
Comment 94 Luke Kenneth Casson Leighton 2020-02-24 18:21:57 GMT
(In reply to Jean-Paul.Chaput from comment #93)
> I've just commited a bug fix in Coriolis so the "part_sig_add" example
> should work. 

found the repo via a git announce email: https://gitlab.lip6.fr/jpc/coriolis

ah! :)

> Dont't hesitate to double check the generated vst to
> confirm that the interfaces are OK now.

there's nothing missing, they're lined up in ls_1.vst, it looks good
jean-paul

> And we absolutely need an independant way to check nMigen vs. extracted
> netlist.

yyeah, beyond that, i have a feeling that one of the important tasks here
will be to work out how to do simulations.  we have unit tests: hmm, they're in
python.  i noticed the RingOscillator simulator input is actually in c.  have to think about that.

thank you!

... next challenge... :)  (am currently working out how to do an ioring,
by looking at the various examples.  i decided to try to "duplicate"
adder - except in nmigen - as much as possible, even to the point of
keeping the names of the inputs and outputs the same).
Comment 95 Luke Kenneth Casson Leighton 2020-02-24 21:33:18 GMT
(In reply to Luke Kenneth Casson Leighton from comment #94)

> ... next challenge... :)  (am currently working out how to do an ioring,
> by looking at the various examples.  i decided to try to "duplicate"
> adder - except in nmigen - as much as possible, even to the point of
> keeping the names of the inputs and outputs the same).

... it doesn't stop... :)

in experiments4 there is a series of mismatches between the cts_r
and ext nets



ck of 'b_2' is NOT connected to ck of 'a_3' in netlist 2
through signal mbk_sig35 but to signal mbk_sig62

ck of 'b_3' is NOT connected to ck of 'a_3' in netlist 2
through signal mbk_sig35 but to signal mbk_sig45


however "make view" actually works which is a really nice surprise.

it is near-identical to the adder example (both the Makefile and ioring.py).
any clues?
Comment 96 Luke Kenneth Casson Leighton 2020-02-24 22:51:53 GMT
jp, a little thought: the place to raise that bug is with yosys, because the input is valid ilang and the output is syntactically valid blif.
Comment 97 Jean-Paul Chaput 2020-02-24 23:12:41 GMT
(In reply to Luke Kenneth Casson Leighton from comment #96)
> jp a little thought, the place to raise that bug is with yosys because it is
> valid ilang and syntactically valid blif.

  Yes, I would think so. Assuming I guessed right...
Comment 98 Jean-Paul Chaput 2020-02-24 23:17:42 GMT
(In reply to Luke Kenneth Casson Leighton from comment #95)
> (In reply to Luke Kenneth Casson Leighton from comment #94)
> 
> > ... next challenge... :)  (am currently working out how to do an ioring,
> > by looking at the various examples.  i decided to try to "duplicate"
> > adder - except in nmigen - as much as possible, even to the point of
> > keeping the names of the inputs and outputs the same).
> 
> ... it doesn't stop... :)
> 
> in experiments4 there is a series of mismatches between the cts_r
> and ext nets
> 
> 
> 
> ck of 'b_2' is NOT connected to ck of 'a_3' in netlist 2
> through signal mbk_sig35 but to signal mbk_sig62
> 
> ck of 'b_3' is NOT connected to ck of 'a_3' in netlist 2
> through signal mbk_sig35 but to signal mbk_sig45
> 
> 
> however "make view" actually works which is a really nice surprise.
> 
> it is near-identical to the adder example, both the Makefile and ioring.py
> any clues?

  I will take a look. But I think there is no need to try to make
  a full design with pads, only a core.
    Foundries are usually very touchy about the pads and allow only
  the ones they validate. So I assume that Staf will add the pads
  himself with the ones that TSMC will supply.
    I know you're reading us Staf ;-)
Comment 99 Luke Kenneth Casson Leighton 2020-02-24 23:30:51 GMT

On Monday, February 24, 2020, <bugzilla-daemon@libre-riscv.org> wrote:
http://bugs.libre-riscv.org/show_bug.cgi?id=178


> it is near-identical to the adder example, both the Makefile and ioring.py
> any clues?

  I will take a look. But I think there is no need to try to make
  a full design with pads, only a core.

yeees... except we need to make a dummy one so that the routing on the floorplan matches the NSEW entry/exit points and order.

in particular, because we are doing a QFP for this test ASIC, and the GND and VDD are used for EM shielding between GPIO pins that will be up to 150mhz in a few cases, the bond wires have to go in the right order (they cannot cross pads)

unlike a BGA which sits on a PCB and the layers sort out any inconvenient mess of arbitrary exit order.




 
    Foundries are usually very touchy about the pads and allows only
  the one they validates. So I assume that Staf will add the pads
  himself with the ones that TSMC will supply.
    I know you're reading us Staf ;-)

:)
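
For reference, this per-side pad ordering is what coriolis2/ioring.py describes. A rough sketch is below; the dictionary keys and pad instance names are recalled from the alliance-check-toolkit examples and should be treated as assumptions, not a definitive format. The explicit per-side lists are what keep the bond wires from crossing on a QFP.

# coriolis2/ioring.py -- sketch only; key and pad names are assumptions
from helpers import l           # lambda-unit helper used by the examples

chip = {
    'pads.ioPadGauge': 'pxlib',
    # explicit ordering per side fixes the bond-wire order
    'pads.south': ['p_a0', 'p_a1', 'p_a2', 'p_a3'],
    'pads.north': ['p_b0', 'p_b1', 'p_b2', 'p_b3'],
    'pads.east':  ['p_vddick0', 'p_vssick0', 'p_ck'],
    'pads.west':  ['p_vddeck0', 'p_vsseck0', 'p_o0'],
    'core.size':  (l(1500), l(1500)),
    'chip.size':  (l(3000), l(3000)),
    'chip.clockTree': True,
}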
Comment 100 Jean-Paul Chaput 2020-02-25 09:21:47 GMT
Hello Luke,

I cannot access the git repository anymore, either through ssh or https.
I tried with libre-riscv and libre-soc also, and looked through the doc
for any specific instructions but did not find any.
Comment 101 Staf Verhaegen 2020-02-25 09:28:15 GMT
>  Foundries are usually very touchy about the pads and allows only
>  the one they validates. So I assume that Staf will add the pads
>  himself with the ones that TSMC will supply.

Not fully true: foundries will tape out anything that does not give DRC violations. You are fully on your own, though, if the chip does not perform as expected and you have deviated from the reference flow. Given that we use a non-qualified P&R tool we are on our own anyway ;).

I will have test open-source IO cells on the test tape-out in May, and if everything goes well these will be used for the libre-SOC prototype tape-out. Using the TSMC ones remains the back-up plan.

So I will make the IO-ring and will provide the location of the pins with input and output connections to the core. This will be either as LEF or directly as a Coriolis database. I will not take care of connecting the core to the IO cells, though.
Comment 102 Staf Verhaegen 2020-02-25 09:33:29 GMT
Alternatively I can also just provide the pin locations of a single IO cell, and the IO-ring can still be built by the Coriolis scripts.
Comment 103 Jean-Paul Chaput 2020-02-25 09:39:14 GMT
(In reply to Staf Verhaegen from comment #102)
> Alternatively I can also just provided the pin location of signel IO cell
> and the IO-ring can still be built by the Coriolis scripts.

Could you provide us with phantoms of the I/O pads?
Just boxes with terminals at the periphery (the soldering pad,
the I/O ring pad and the one to/from the core) compatible with
nsxlib ?
Comment 104 Luke Kenneth Casson Leighton 2020-02-25 10:17:27 GMT
(In reply to Jean-Paul.Chaput from comment #100)
> Hello Luke,
> 
> I cannot access the git repository anymore. Either through ssh or https.

there's no http or https access for git (only the web "browsing" frontend): gitweb is running on the git port (9418).

ssh has been moved to port 922 (a habit i picked up 12 years ago
to stop script kiddies).

please try this and send the results back:

ssh -v -p922 gitolite3@git.libre-riscv.org

if it pauses with no action (at all) then you managed to try to log in
with an unauthorised user/pass combination (from the wrong machine, or
with the wrong ssh key).

this would result in the IP address being banned immediately (see below)


> Tried with libre-riscv and libre-soc also. 

libre-soc's not set up yet.


> And looked through the doc
> for any specific instructions but did not found any.

https://libre-riscv.org/HDL_workflow/

so you want:

git clone ssh://gitolite3@git.libre-riscv.org:922/REPONAME.git


however.... if you've made the mistake of trying to use a username/password
on ssh, the sheer volume of scripted and DDoS attacks against the servers
that i run has led me to get extremely draconian with fail2ban.

you get *one* chance to get an ssh-user/pass login (which you don't have,
because gitolite3 only accepts ssh keys), and fail2ban *WILL* block you
for at least 48 hours.

can you send me (off-list) the IP address that you're ssh'ing in from?
i will check the fail2ban logs and see if it's been added, then whitelist
it.
Comment 105 Luke Kenneth Casson Leighton 2020-02-25 10:18:57 GMT
(In reply to Jean-Paul.Chaput from comment #103)
> (In reply to Staf Verhaegen from comment #102)
> > Alternatively I can also just provided the pin location of signel IO cell
> > and the IO-ring can still be built by the Coriolis scripts.
> 
> Could you provide us with phantoms of the I/O pads?

ah, good word: phantoms.

that's what i wondered if we could use instead of signing NDAs.  "phantom"
Cell Libraries, just to be able to do layout without Foundry Cell
NDAs.
Comment 106 Staf Verhaegen 2020-02-25 10:26:19 GMT
(In reply to Jean-Paul.Chaput from comment #103)
> (In reply to Staf Verhaegen from comment #102)
> > Alternatively I can also just provided the pin location of signel IO cell
> > and the IO-ring can still be built by the Coriolis scripts.
> 
> Could you provide us with phantoms of the I/O pads?
> Just boxes with terminals at the periphery (the soldering pad,
> the I/O ring pad and the one to/from the core) compatible with
> nsxlib ?

That's actually what I meant; but the first plan is to have open-source IO cells, and then you will get the Coriolis library with the full design of the cells.

> ah, good word: phantoms.

Actually the term used in the industry is abstract views...
Comment 107 Luke Kenneth Casson Leighton 2020-02-25 10:34:53 GMT
(In reply to Staf Verhaegen from comment #106)

> Actually the term used in the industry is abstract views...

ah excellent thank you i added that to
https://libre-riscv.org/3d_gpu/tutorial/?
Comment 108 Jean-Paul Chaput 2020-02-25 10:36:10 GMT
(In reply to Luke Kenneth Casson Leighton from comment #104)
> (In reply to Jean-Paul.Chaput from comment #100)
> > Hello Luke,
> > 
> > I cannot access the git repository anymore. Either through ssh or https.

> so you want:
> 
> git clone ssh://gitolite3@git.libre-riscv.org:922/REPONAME.git

OK, I'm stupid. This was actually written on the homepage.
Not enough caffeine yet.
Comment 109 Jean-Paul Chaput 2020-02-25 10:41:53 GMT
(In reply to Luke Kenneth Casson Leighton from comment #107)
> (In reply to Staf Verhaegen from comment #106)
> 
> > Actually the term used in the industry is abstract views...
> 
> ah excellent thank you i added that to
> https://libre-riscv.org/3d_gpu/tutorial/?

Yes, I heard "phantoms" in the old days, when things were much
less formalized...

So anyway, we can use abstract views until the complete layouts
of the free I/O pads are available. Maybe in two flavors: one for
the free I/O and one for TSMC's.

And I remember ST being reluctant to use custom-made I/O pads...
Comment 110 Luke Kenneth Casson Leighton 2020-02-25 11:37:50 GMT
(In reply to Jean-Paul.Chaput from comment #108)

> > git clone ssh://gitolite3@git.libre-riscv.org:922/REPONAME.git
> 
> OK, I'm stupid. This was actually written on the homepage.
> Not enough caffeine yet.

doh :)

oh, Staf: if there is time, are you going to include a PLL in the test
ASIC in March? (RingOscillator)

jean-paul while you are looking at the ioring ck discrepancies
i am going to investigate the next phase: how to do hierarchical
auto-placement.

i don't believe we need to go the full manual placement route,
just to define (like an ioring) where the inputs/outputs are,
the outer box size, and then auto-place/route from there.

it looks like snx/phenitec06/doSnxCore.py *might* be exactly what
i am looking for (except it doesn't compile at the moment)

        - Clock Signal .......................................... .*ck.*|.*nck.*
        - Blockages ........................................... blockage[Nn]et.*
     o  Special Cells.
        - Pads .......................................................... .*_px$


[ERROR] Unable to load cell "snx_chip" (option "--cell=...")
  o  Cleaning up any previous run.
[ERROR] ClockTree: No cell loaded yet.
        Python stack trace:
        #0 in                  __init__() at .../install/lib/python2.7/dist-packages/crlcore/helpers/io.py:166
        #1 in                ScriptMain() at .../dist-packages/cumulus/plugins/ClockTreePlugin.py:94
        #2 in                ScriptMain() at /home/lkcl/alliance-check-toolkit/bin/doChip.py:195
        #3 in                  <module>() at /home/lkcl/alliance-check-toolkit/bin/doChip.py:328

[WARNING] No Cell loaded in the editor (yet), nothing done.
Katana.get: Argument type mismatch, should be:<ent> not <none>
mk/pr-coriolis.mk:83: recipe for target 'snx_chip_cts_r.ap' failed
make: [snx_chip_cts_r.ap] Error 1 (ignored)
Comment 111 Luke Kenneth Casson Leighton 2020-02-25 12:56:08 GMT
ok this was because ioring.py in the phenitec06 directory was not being noticed, i moved it to coriolis2/ and it was detected.  am trying to sort that out now
Comment 112 Staf Verhaegen 2020-02-25 13:47:31 GMT
> oh, Staf: if there is time, are you going to include a PLL in the test
> ASIC in March? (RingOscillator)

We already discussed this. There is no time for me to do it and if somebody else wants to do the design he needs to get through the TSMC NDA procedure which I also don't see feasible.

The 0.18um prototype will still run at a frequency where the clock can be provided externally without the need of a PLL on-chip.

Possible PLL development should be done in a separate project and getting the 0.18um libre-SOC prototype design finished without a PLL should IMHO get priority.
Comment 113 Luke Kenneth Casson Leighton 2020-02-25 14:13:10 GMT
(In reply to Staf Verhaegen from comment #112)
> > oh, Staf: if there is time, are you going to include a PLL in the test
> > ASIC in March? (RingOscillator)
> 
> We already discussed this. 

i'd forgotten, sorry.  also i hadn't been aware of the RingOscillator
at the time.

> Possible PLL development should be done in a separate project and getting
> the 0.18um libre-SOC prototype design finished without a PLL should IMHO get
> priority.

yes agreed.
Comment 114 Luke Kenneth Casson Leighton 2020-02-25 14:58:04 GMT
Created attachment 25 [details]
snx core block layout

oo, oo, really exciting, got it to produce some output and
added new pins to it on the SOUTH side - inst bus (16 pins)

tried to add them on NORTH however the router said "nope, NET
not close enough".

wooow i'm so haaappy :)
Comment 115 Jean-Paul Chaput 2020-02-25 16:29:55 GMT
(In reply to Jean-Paul.Chaput from comment #98)
> (In reply to Luke Kenneth Casson Leighton from comment #95)
> > (In reply to Luke Kenneth Casson Leighton from comment #94)

> > ck of 'b_2' is NOT connected to ck of 'a_3' in netlist 2
> > through signal mbk_sig35 but to signal mbk_sig62
> > 
> > ck of 'b_3' is NOT connected to ck of 'a_3' in netlist 2
> > through signal mbk_sig35 but to signal mbk_sig45
> > 
> > 
> > however "make view" actually works which is a really nice surprise.
> > 
> > it is near-identical to the adder example, both the Makefile and ioring.py
> > any clues?

Got it. In your coriolis2/settings.py, allow a wider range
of clock signal names:

env.setCLOCK( 'clk|ck|cki' )

The clock signal/terminal inside the I/O pads is named "ck" and was not
recognized as a clock, so it wasn't routed; hence all the clocks
of the pads (in the ring) ended up disconnected.
Comment 116 Jean-Paul Chaput 2020-02-25 16:38:31 GMT
(In reply to Luke Kenneth Casson Leighton from comment #111)
> ok this was because ioring.py in the phenitec06 directory was not being
> noticed, i moved it to coriolis2/ and it was detected.  am trying to sort
> that out now

It is my fault here.

I recently changed the configuration system of Coriolis to be
fully "Python" and used through import statements (a good suggestion
from Staf). In the process I did "unhide" the configuration
directory inside the project, which changed from ".coriolis2" to
"coriolis2". Then I had to adapt alliance-check-toolkit
to the new scheme, but as it is tedious I didn't do it on all the
examples, only the ones I use most. So, if you see a
"coriolis2/" directory, the example is up to date (and the
previous hidden directory has been moved to "deprecated.coriolis2").
Or there is still a ".coriolis2", and in that case you have to
migrate before the Makefile / design flow works again.
And to make things more obscure, I haven't had time yet to
update the doc...
Comment 117 Jean-Paul Chaput 2020-02-25 16:40:21 GMT
(In reply to Luke Kenneth Casson Leighton from comment #114)
> Created attachment 25 [details]
> snx core block layout
> 
> oo, oo, really exciting, got it to produce some output and
> added new pins to it on the SOUTH side - inst bus (16 pins)
> 
> tried to add them on NORTH however the router said "nope, NET
> not close enough".

  Yes, the pads must be more or less in front of the side of the
  chip, not too much in the corners.

> wooow i'm so haaappy :)

  Now, at least, you can make nice postcards ;-)
Comment 118 Luke Kenneth Casson Leighton 2020-02-25 17:54:58 GMT
rrright.  the next bit i need is to do an individual cell
(specifying the block size) - i have the design doing that,
creating alu16.ap

what i need after that is to do snx_scan.ap... *but using
the alu16.ap just created*

i think i may have to create an experiment5 which uses alu_hier.py,
because there are only two extra cells there: add and sub.
Comment 119 Luke Kenneth Casson Leighton 2020-02-25 17:58:21 GMT
(In reply to Jean-Paul.Chaput from comment #115)

> Got it. In your coriolis2/settings.py, allow a more wide range
> of clock signal names:
> 
> env.setCLOCK( 'clk|ck|cki' )
> 
> The clock signal/terminal inside the I/O pads is named "ck" and was not
> recognized as a clock, so it wasn't routed, hence all the clock
> of the pads (in the ring) got disconnected.

ee!  it worked!
Comment 120 Luke Kenneth Casson Leighton 2020-02-25 18:00:32 GMT
(In reply to Jean-Paul.Chaput from comment #116)

> "coriolis2/" directory, the example is up to date (and the
> previous hidden directory has been moved to "deprecated.coriolis2").
> Or there is still a ".coriolis2" and in that case you have to
> migrate before the Makefile / design flow works again.
> And to make things more obscure, I didn't had time yet to
> update the doc...

loovelyyy yes i noticed you have to pass an argument into
katana.runGlobalRouter(xxx) now.  i passed in 0 and it worked
fine.
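
For the record, a minimal sketch of the place-and-route sequence being exercised here, pieced together from the calls mentioned in this thread (etesian.place() and katana.runGlobalRouter(0)); the framework and engine-creation calls around them are assumptions about the Coriolis Python API rather than a verified script:

import CRL, Etesian, Katana     # Coriolis2 Python modules

af   = CRL.AllianceFramework.get()
cell = af.getCell('alu_hier', CRL.Catalog.State.Views)   # load the vst netlist

etesian = Etesian.EtesianEngine.create(cell)
etesian.place()                       # standard-cell placement

katana = Katana.KatanaEngine.create(cell)
katana.runGlobalRouter(0)             # now takes a flags argument; 0 works
# detailed routing and saving of the .ap follow (omitted here)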
Comment 121 Luke Kenneth Casson Leighton 2020-02-25 18:51:59 GMT
ok so i cut/paste the code which creates snx_core.ap and got it to
create a *sub* cell... add.ap.  size: 2000 x 800.

then do the same thing for sub.ap

all good so far.

then in the same code, i cut/paste the same section, except of course
creating a new cell "alu_hier" which loads alu_hier.vst, and
setting up the inputs and outputs, and also a new size: 3000 x 2000.

then, exact same thing with alu_hier.vst (Logical on addCell) and
we get this:

     + ./alu_hier.vst [models]
        + /home/lkcl/alliance/install/cells/sxlib/nmx2_x1.vbe [behavioral]
        + /home/lkcl/alliance/install/cells/sxlib/nmx2_x1.ap
        + /home/lkcl/alliance/install/cells/sxlib/sff1_x4.vbe [behavioral]
        + /home/lkcl/alliance/install/cells/sxlib/sff1_x4.ap
     + ./alu_hier.vst [structural]
  o  Creating ToolEngine<Etesian> for Cell <alu_hier>
     - Initial memory .................................................. 307.7Mb
  o  Configuration of ToolEngine<Etesian> for Cell <alu_hier>
     - Cell Gauge ...................................................... <sxlib>
     - Place Effort .......................................................... 2
     - Update Conf ........................................................... 2
     - Spreading Conf ........................................................ 1
     - Routing driven .................................................... false
     - Space Margin ........................................................ 20%
     - Aspect Ratio ....................................................... 100%
     - Bloat model .................................................... disabled
  o  Converting <alu_hier> into Coloquinte.
     - H-pitch .............................................................. 5l
     - V-pitch .............................................................. 5l
     - Converting 6191 instances
Traceback (most recent call last):
  File "doAlu16.py", line 344, in <module>
    success      = alu_hier()
  File "doAlu16.py", line 320, in alu_hier
    etesian.place()
hurricane.HurricaneError: [ERROR] EtesianEngine::toColoquinte(): Non-leaf instance "subckt_49_sub" of "sub" has an abutment box but is *not* placed.

so... errr... what's going on?  the whole point is: i want that subckt_49_sub
(which is the cell created previously) to be treated as "something to place".

how do we get both add.ap and sub.ap to be added to the list of cells
to be "placed"?
Comment 123 Luke Kenneth Casson Leighton 2020-02-25 19:02:24 GMT
hmm, i just took a look at the nsxlib Makefile.

effectively, what is needed is: add.ap, sub.ap, these go into a
Cell Library!

create a CATAL file:
add C
sub C

create an actual alu_hier.lib

then add that to the ap.environment as an actual Cell Library!

of course, this is a slightly ridiculous way to do it, however it is
in effect actually what is needed.

there must be a better way - one that's simpler, that will work
hierarchically, because i would be very surprised if the above
approach (Cell Library) would work hierarchically, because the
alu_hier.lib would be made from nsxlib cells...

i don't know.
Comment 124 Jean-Paul Chaput 2020-02-25 21:11:03 GMT
(In reply to Luke Kenneth Casson Leighton from comment #121)

> Traceback (most recent call last):
>   File "doAlu16.py", line 344, in <module>
>     success      = alu_hier()
>   File "doAlu16.py", line 320, in alu_hier
>     etesian.place()
> hurricane.HurricaneError: [ERROR] EtesianEngine::toColoquinte(): Non-leaf
> instance "subckt_49_sub" of "sub" has an abutment box but is *not* placed.
> 
> so... errr... what's going on?  the whole point is: i want that subckt_49_sub
> (which is the cell created previously, to be treated as "something to place".
> 
> how do we get both add.ap and sub.ap to be added to the list of cells
> to be "placed"?

  Not sure I understand what you want to do here. Do you:

  1. Want to create placed and routed blocks of "add" & "sub", then
     place them to build the whole core.

  2. Create a hierarchical design with netlists "add" and "sub", but
     to have the whole core placed & routed in one shot "as if it was flat"
     (Coriolis is good at that).

  The option 2. is readily doable, and in that case the message you have
  means that you must remove any ".ap" file *before* placing (but keep
  the ".vst").

  For option 1., the placer can only handle standard cells, not blocks
  (or "macros"). In that case you have to manually build the placement
  of the blocks (floorplanning). The router can manage block routing.
Comment 125 Luke Kenneth Casson Leighton 2020-02-25 22:43:09 GMT
(In reply to Jean-Paul.Chaput from comment #124)

>   Not sure I understand what you want to do here. Do you:
> 
>   1. Want to create placed and routed blocks of "add" & "sub", then
>      place them to build the whole core.

3. create P&R'd blocks of *type* add.ap and sub.ap which in the previous function had just been created (from add.vst and sub.vst) AS IF they were in {insert cell library name}.lib

in other words what in effect i need is the exact same effect as if add.ap/vst and sub.ap/vst had been added to a Cell Library...

... now i want alu_hier.ap to P&R *using* the two blocks...

... and then alu_hier.ap to be written...

*and then we do it all over again at the next level of the hierarchy*.

a cell library made from a cell library made from a cell library, until finally we get to the top-level floorplan.

so for example:

* the FP ALU is made from a MUL unit, DIV unit, etc.
* the IEEE754 MUL ALU is made from 5 blocks (denormalise, mul stage 0, mul stage 1, normalise, packing)
* denormalise is made from 3 blocks, including 2 SHIFTER blocks.
* SHIFTER block is made from...

etc etc.  layer layer layer layer.

every layer i want to be automatically P&R'd, to be used in the layer above.

>   For option 1., the placer can only handle standard cells, not blocks
>   (or "macros").

which is why i wondered, can add.ap/vst and sub.ap/vst be added *to* a Cell Library, and then used in the next level?

repeat, repeat, repeat 

if add.ap/vst and sub.ap/vst can be added to a fake (temporary) Cell Library, even if it means using a Makefile to do so, then adding that new Cell Library to the ap Environment, we have a way to do what i want.


> In that case you have to manually build the placement
>   of the blocks (floorplaning).

that would be impractical because it means manual placement at every level of hierarchy.

we have maybe 20 levels of hierarchy, and maybe... 1000 or more blocks to place across those 20 levels.

doing all of them manually, even placement, in hierarchical floorplans, one level at a time, is not practical.

what *is* practical is to create increasingly large blocks, P&R'd automatically, moving on to the next level and finally ONLY doing manual placement at the very last top level floorplan.


> The router can manage block routing.

ok i noticed that the router might be able to reposition blocks. (is that correct?)

so another way would be to build the placement in fake (dummy) locations, then ask the router to move them to better locations.

is that possible?
Comment 126 Jean-Paul Chaput 2020-02-26 10:03:24 GMT
(In reply to Luke Kenneth Casson Leighton from comment #125)
> (In reply to Jean-Paul.Chaput from comment #124)

OK I see your vision of hierarchical P&R.

* The bottom up assembly approach is completely legitimate for the
  netlists.

* For the layout, less so. I reason along the same lines as Staf, but
  maybe less radically (or I'm lagging behind).

  * If you break your design into too-tiny blocks you will prevent
    the placer from performing some placement optimizations. So, in
    my opinion you should have one or two levels of hierarchy in
    the layout placement:
      the top-level floorplan (full chip) and maybe a supplemental
    one in the big top-level sub-blocks.
      Staf recommends doing it completely flat (his arguments make
    perfect sense, but I need to see it for myself).

  * Another argument is that I'm not sure that Coriolis will handle
    well (or in reasonable time) blocks of more than 100K gates.
    So your approach is important, at least as a "backup plan".

  * Still another point to consider is how much "block layout" reuse
    you have. Sub-block layout may be interesting if you have
    one block used multiple times that can have the same form factor
    and pin positions.

Anyway, there are always a lot of tradeoffs at that stage of a design.
Ideally you should make a check of all three approaches, at least on a
significant part of the design.

Now, for the technical part of recursively building layout with
Coriolis:

* The placer can only handle cells. So only the "leaf" netlists can
  be done by it.

* Do not put blocks in libraries, that would confuse some Alliance
  tools that would see them as "terminal black boxes".

* To assemble placed layouts you must write Python scripts. They are
  not complicated once you know what to do, but still, it implies
  that *you* know beforehand how the blocks must be placed.
  This is an *automated* *manual* floorplan (see the sketch below).
    You can even develop a Python program to further ease the
  automation.
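
  For the record, a hedged sketch of what such a placement script can look
  like (the Hurricane calls are assumed from the API, and the instance names
  and coordinates are made up): fix each pre-built sub-block instance at a
  position, then let the router connect them.

from Hurricane import DbU, Transformation, Instance

# sketch only: 'cell' is the parent cell (e.g. alu_hier) whose sub-blocks
# (add, sub) have already been placed & routed; names/offsets are made up
def place_instance(cell, name, x_l, y_l):
    inst = cell.getInstance(name)
    inst.setTransformation(
        Transformation(DbU.fromLambda(x_l), DbU.fromLambda(y_l),
                       Transformation.Orientation.ID))
    inst.setPlacementStatus(Instance.PlacementStatus.FIXED)

# two blocks side by side, output of one facing the input of the next,
# with a routing gap in between
place_instance(cell, 'subckt_48_add', 0.0,    0.0)
place_instance(cell, 'subckt_49_sub', 2200.0, 0.0)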

There are still questions left open:

* Should we place the whole chip (whatever the method) and then route
  it in one go? This may avoid the need for channel routing.

* Or should we route each sub-block as we go up? This may allow us to
  create guard rings, but will need the creation of routing channels
  and the placement of the external terminals of the blocks.

All of the above are difficult questions, all the more so because the
answers may emerge only after we start to implement.
Comment 127 Staf Verhaegen 2020-02-26 11:37:25 GMT
(In reply to Jean-Paul.Chaput from comment #126)
> 
> * For the layout, less so. I reason along the same lines as Staf, but
>   maybe less radically (or I'm lagging behind).
> 
>   * If you break your design in too tiny blocks you will prevent
>     the placer to perform some placement optimization. So, in
>     my opinion you should have one or two level of hierarchy in
>     the layout placement.

Placing the standard cells themselves can be done quite fine-grained without the need for open room (extra open room is only added to get the pin density low enough so the router can complete). If you want to place higher-level blocks of different sizes and shapes you seriously complicate the problem. Given that the normal placement of standard cells is already a difficult NP problem, you don't want to add such extra complication to the algorithm. Existing algorithms will also not be able to cope with it (proprietary placers allow multi-row macros, for example for 4-bit registers).

And it is not only optimization in placing that you block, but also optimization in synthesis. If you have, for example, an inverter on the output of such a low-level block that is connected to the input of another block with an inverter on it, these two inverters will be removed during synthesis after flattening the design. More generally, you block synthesis optimization on every path that crosses a block boundary if you don't flatten.

>       The top level floorplan (full chip) and maybe a supplemental
>     one in big top-level sub-blocks.
>       Staf recommend to do it completely flat (his arguments makes
>     perfect sense, but I need to see it by myself).

I can agree that if you make a multi-core chip it may make sense to do the P&R on one core and manually place the different instances of the cores in the floorplan.
But the use case I am focusing on is people who develop their design using an HDL like nmigen or SpinalHDL on an FPGA and then order an ASIC for that. For them the ASIC compilation should be a fully automated process and they should not have to take care of floorplanning. In such a setting flattening seems the most efficient approach, and if the compiler wants to keep hierarchy it should be able to determine on its own where to keep it.
I see it the same way as the optimization flags for compilers: most people will stick to -O[0-3], and only for very specific cases does one fine-tune the optimization flags to get optimal performance, or even use some hand-coded assembly somewhere.
Comment 128 Luke Kenneth Casson Leighton 2020-02-26 11:54:23 GMT
(In reply to Jean-Paul.Chaput from comment #126)
> (In reply to Luke Kenneth Casson Leighton from comment #125)
> > (In reply to Jean-Paul.Chaput from comment #124)
> 
> OK I see your vision of hierarchical P&R.

hurrah :) as long as the blocks are flattened and can be treated as a "new cell in a cell library even though they are 20000 gates" it should work.


>   * If you break your design in too tiny blocks you will prevent
>     the placer to perform some placement optimization.

i do not intend to go mad, at a few levels below the leaf nodes let the autorouter do its thing.

however i know for example that the pipeline stages for the IEEE754 FPU functions such as RSQRT are very clearly and obviously "input on one side" and "output on the other", in a chain eight stages long, and therefore should be laid out as a forward directed graph only.



> So, in
>     my opinion you should have one or two level of hierarchy in
>     the layout placement.

that sounds sensible.

>       The top level floorplan (full chip) and maybe a supplemental
>     one in big top-level sub-blocks.
>       Staf recommend to do it completely flat (his arguments makes
>     perfect sense, but I need to see it by myself).

500,000 gates.  i have seen how much time the placer takes as the size of the block grows: it is at least O(N^2) time, and i believe it may take several weeks or months to complete.

i am a little concerned about even trying a floorplan at that size; we may have to place entirely manually and then do the autoroute.

>   * Another argument is that I'm not sure that Coriolis will handle
>     well (or in reasonnable time) blocks of more than 100K gates.

yes, i agree.  based on the slowdown i have seen just going from a 3000x3000 to a 5000x5000 block, it is concerning.

if the "granularity" of placement (the grid) can be made more coarse then maybe it can be speeded up when the blocks are very large.

>     So your approach is important at least as "backup plan".
> 
>   * Still another point to consider is how much "block layout" reuse
>     do you have ?

because we have SIMD engines, actually quite a lot.

as a GPU we need multiple FPUs of the same type (multiple FPMUL units, multiple LD/ST units etc)

> Sub-block layout may be interesting if you have
>     one block used multiple time that can have the same form factor
>     and pin positions.

yes, and so the idea of adding the FPU blocks *as* a Cell Library, even though they are Monsters at maybe 40,000 gates each, starts to make sense.

> 
> Anyway there is always a lot of tradeoff in that stage of a design.
> Ideally you should make a check for all three of them, at least on
> significant part of the design.
> 
> Now, for the technical part of recursively building layout with
> Coriolis:
> 
> * The placer can only handle cells. So only the "leaf" netlists can
>   be done by him.

drat.  i was hoping to be able to treat the blocks as "cells" even though they are 50,000 gates.

> 
> * Do not put blocks in libraries, that would confuse some Alliance
>   tools that would see them as "terminal black boxes".

interesting.  ok.

> * To assemble placed layout you must write Python scripts. They are
>   not complicated once you know what to do, but still, it implies
>   that *you* know beforehand how the blocks must be placed.
>   This is *automated* *manual* floorplan.
>     You can even develop a Python program to further ease the
>   automation.

yes, i saw ringoscillator.py and i liked the approach.

i think, if we do not go too deep with the manual hierarchy, it will be ok.

particularly because, in the FPU pipelines, things are very, very easy:

* input on NORTH (or WEST)
* output on SOUTH (or EAST)
* deliberately make them the same height (or width)
* size of each pipeline stage tells you exactly how to lay them out.
* leave a gap
* router connects output of one to input of next.

done.
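
as a rough sketch of what that "manual chain" looks like as a Coriolis python script (stage names and the height/gap numbers here are invented, and the Hurricane calls are the same kind used in ringoscillator.py, so treat this as an untested outline):

from Hurricane import DbU, Transformation, Instance, UpdateSession

def l(v):
    return DbU.fromLambda(v)

def place_fixed(instance, x, y, orient=Transformation.Orientation.ID):
    # pin an already-laid-out block at (x, y), given in lambda
    instance.setTransformation(Transformation(l(x), l(y), orient))
    instance.setPlacementStatus(Instance.PlacementStatus.FIXED)

def chain_stages(cell, names, stage_height=400.0, gap=50.0):
    # stack equal-height pipeline stages, leaving a routing gap between them
    UpdateSession.open()
    try:
        y = 0.0
        for name in names:
            place_fixed(cell.getInstance(name), 0.0, y)
            y += stage_height + gap
    finally:
        UpdateSession.close()

# e.g. chain_stages(fpdiv_cell, ['stage0', 'stage1', 'stage2', 'stage3'])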



> 
> There are still questions left open:

i have some too, below
 
> * Should we place the whole chip (whatever the method) then route
>   in one go. This may avoid the creation of channel routing.

what is "channel routing"?

the FPU pipelines, we know for a fact, are (very large but) completely isolated, connected only to each other; at the end you get one massive block with inputs on one side and outputs on the other, and no additional inputs or outputs from the individual stages.

i really should send you one of the FPU .il files so you can walk it with yosys, or do a video.

my concern with doing single-pass global routing is that even that may take a huge amount of time.

> * Or should we route each sub-block as we go up. Which may allow to
>   create guard ring, but will need the creation of routing channels,
>   and place the external terminals of the blocks.

ah, i think i can deduce what channel routing is: you mean that if there are groups of lines, such as a data bus, you want them to stay together and only go a certain way?

or, you mean, when creating a guard ring of VIAs you have to leave some gap so the routing can get through to the edge of the block?

 
> All of the above are difficult questions, even so because the answer
> may emerge only after starting to implement.

it is fine, jean-paul: i have done PCB layout for 8 years now, including creating libraries of parts.

and i am a python programmer who has done c/c++ modules. actually, a python program that *generated* c++ modules based on IDL files.

additional questions:

1. at the leaf nodes, is it possible to tell the auto stage, "i want a fixed height but you must keep the width as small as possible"?

2. can we specify that inputs definitely go on NORTH and outputs definitely on SOUTH, and then have the auto stage do a layout which puts cells in between, 100% guaranteed to succeed?

1 is because of the FPU pipelines: i would like each stage to be the same height, so that when connected to each other they can be manually placed in a chain, all of the same height.

yes, we could do an iterative approach ("does this width work? FAIL. does this width work? FAIL") but it would be nicer not to have to do that!

or, to have a way to find out in advance, before routing? it should be possible to estimate the size of the block even before Placement, right?

2 is again the same thing: the output from the previous block, we *know*, goes directly and straight to the next block.  if the data has to route all the way round, that is silly.

the reason i ask is because in the experiments yesterday, the P&R refused to complete, when i told it "put input on NORTH and output on SOUTH".
Comment 129 Luke Kenneth Casson Leighton 2020-02-26 12:30:08 GMT
(In reply to Staf Verhaegen from comment #127)

> Existing algorithms will also no be able to cope with it either (proprietary
> placers allow to have multi-row macros for for example 4-bit registers).

interesting.  so a 64 bit register latch would be done as a batch of 16 4-bit Standard Library Cells because clearly those go together.

> 
> And it is not only optimization in placing that you block but also
> optimization in synthesis. If you have for example an inverter on the output
> of such a low level block that is connected to the input of another block
> with an inverter on it these two inverters will be removed during synthesis
> after flattening the design. 

understood.

we may just have to eat inverters, then, in some high level cases.

in the crossover (we were writing at the same time :) ) i explained that it is unlikely that we will go all the way manual to the leaf nodes.

so for each FPU pipeline stage (5000 gates, maybe) we will do a block.

these blocks we *know* in advance, they are *only* connected by register latches.

no inverters or other opportunities for synthesis optimisation.

not even buffers needed [actually not true because there are global "cancellation" lines needed, which go to every stage in the pipeline, saying "please discard result with ID 0b01101, right now"]


> This can be generalized in that you block
> synthesis optimization in each path that goes over the block boundary if you
> don't flatten.

500,000 gates, flattened: it's just not going to work.  you can check for yourself by increasing the chip block size in, say, ao68000, to 10,000 x 10,000

or, in one of the bench tests with an ioring.py, change the ARM chip size to 20000 x 20000 for example.

the completion time will jump from 5 minutes to about... 2 hours or more, each placement of a Standard Cell taking *minutes*

fortunately there are clear, known boundaries: the lower levels we can flatten, and the top levels then do not matter so much.


> I can agree that if you make a multi-core chip it may make sense to do the
> P&R on one core and manually place the different instances of the cores in
> the floorplan.

the IEEE754 FPMUL: we need 4 of them.

that is around 40,000 gates for *just one FPMUL*!

it is a monstrous cascade Wallace Tree (we actually need to replace it with the Dadda algorithm).

likewise, FPDIV/SQRT/RSQRT is an 8-stage pipeline, and we need 4 of those; they are all identical.

the LD and ST units, 4 of those (possibly 8).

for VPU processing we also need bit manipulation, and we may also have to add multiple DCT blocks as part of the ALU.

this is a *massive* chip with a lot of regular blocks, to meet the expected performance levels of 3D and Video processing.


> But the use case I am focusing on is people that develop their design using
> HDL like nmigen or SpinalHDL on a FPGA and then order an ASIC for that. For
> them the ASIC compilation should be a fully automated process and they
> should not have to take care of floorplanning.

if the entire chip in such designs is even as high as 100,000 gates, like jean-paul said, it would take a long time but would still be fine.

this design is a massive regular repetition of ALU and SIMD resources.  these computation resources far exceed the size of the main processor core and even the L1 caches.

therefore doing them as repeatable blocks makes sense to me.
Comment 130 Staf Verhaegen 2020-02-26 12:53:26 GMT
(In reply to Luke Kenneth Casson Leighton from comment #129)
> these blocks we *know* in advance, they are *only* connected by register
> latches.

If we are naming things anyway this is called a datapath in the industry.

The problem I see with using datapath layout is that typically the input of the datapath comes from the register file and the output also has to go back to the register file. So if you always go left to right, one of the sides will be far away from the register file. For smaller technology nodes the capacitive load of these long paths will be a killer for performance.
This problem is more pronounced if you have several different functional blocks which all take their input from, and send their output to, the register file.

Using an analytic placer will naturally get both the inputs and the outputs close to the register file and move the middle of the path further away, minimizing the extra delay from the interconnects.
Comment 131 Luke Kenneth Casson Leighton 2020-02-26 13:15:18 GMT
(In reply to Staf Verhaegen from comment #130)
> (In reply to Luke Kenneth Casson Leighton from comment #129)
> > these blocks we *know* in advance, they are *only* connected by register
> > latches.
> 
> If we are naming things anyway this is called a datapath in the industry.

ah ha! another new term for the wiki :)

> Problem I see with using datapath layout is that typically the input of the
> datapath comes from the register file and also the output has to go to the
> register file. 

ah.  right.  yes you are correct.  we do not want *all* datapath layout to be NORTH-as-input, SOUTH-as-output.

ok so in the 6600 out-of-order design, the reads go into "Function Unit" latches (Reservation Stations if you are familiar with the Tomasulo Algorithm terminology).

the register data waits in those FU latches until all of them are available: this may not be immediate because, even if there are no Dependency Hazards, there may simply not be enough Register-File Read-Port bus bandwidth at that exact moment to get the data needed for that Function Unit to fill all of its operand latches.

once ready, the FU may proceed to ask one of the ALUs for a time-slot.  at that point it is "go".

all of this, it is still "input on NORTH, output on SOUTH".  or, more accurately, "data input and latch ACK output on NORTH, data output and latch WAIT input on SOUTH".

then on the other side of the ALUs, there is *another* latch, this time capturing the output.

these outputs, there is a "Register Bypass Bus" which can feed *back* into the Function Unit latches, *and* there is a multi-way Register-File Write-Port Bus.

so it's nowhere near as straightforward as a "standard in-order single-core pipeline design": you can see that there are several circular datapaths between the blocks.


> So if you go always left to right one of the sides will be
> far away from the register file. For smaller technology nodes the capacitive
> load of these long paths will be a killer for performance.
> This problem is more pronounced if you have different functional blocks
> where for all the blocks the input and output is coming from and going to
> the register file.
> 
> Using an analytic placer will naturally get both the input and outputs close
> the register file and move the middle of the path further away minimizing
> extra delay from the interconnects.

that would be really nice to have.  because of the circular nature of the design, we may just have to see how it goes.

the Function Unit input latches and the ALU Result output latches need to be as close as possible to the register file, via the buses; however, the sheer size of the ALU blocks themselves, which sit *in between* the input and output latches, is going to make that quite challenging.

one thought there is to split the ALU pipelines in half (making them particularly narrow), then turn the data around half way along, routing it *back* through the second half of the pipeline stages, so that the result data arrives *back* as close to its starting point as possible.

however we are still looking at a massive data bus.  "Common Data Bus" in Tomasulo Algorithm terminology.
Comment 132 Staf Verhaegen 2020-02-26 15:29:52 GMT
(In reply to Luke Kenneth Casson Leighton from comment #131)
> (In reply to Staf Verhaegen from comment #130)
> > Using an analytic placer will naturally get both the input and outputs close
> > the register file and move the middle of the path further away minimizing
> > extra delay from the interconnects.
> 
> that would be really nice to have.  because of the circular nature of the
> design, we may just have to see how it goes.

The Coriolis placer is an analytic placer...
and option 2 of what Jean-Paul proposed is actually doing what I describe...
:)
Comment 133 Staf Verhaegen 2020-02-26 15:34:35 GMT
(In reply to Luke Kenneth Casson Leighton from comment #129)
> (In reply to Staf Verhaegen from comment #127)
> 
> > But the use case I am focusing on is people that develop their design using
> > HDL like nmigen or SpinalHDL on a FPGA and then order an ASIC for that. For
> > them the ASIC compilation should be a fully automated process and they
> > should not have to take care of floorplanning.
> 
> if the entire chip in such designs is even as high as 100,000 gates, like
> jeanpaul said, it would take a long time but would still be fine.

In industry the target run length is one night, i.e. a new P&R is started before leaving and the results are ready the next morning.
Currently in Coriolis the placer and router are single-threaded, so there should be room for improvement there.
Comment 134 Luke Kenneth Casson Leighton 2020-02-26 16:03:16 GMT
(In reply to Staf Verhaegen from comment #132)

> The Coriolis placer is an analytic placer...
> and option 2 of what Jean-Paul proposed is actually doing what I describe...
> :)

hooray! :)

(In reply to Staf Verhaegen from comment #133)

> In industry target of run length is one night; e.g. new P&R is started
> before leaving and results ready the morning next day.
> Currently in Coriolis the places and router are single-threaded so there
> should be room for improvement there.

this was another reason i like Makefile, and why i wanted things divided
into lower-level blocks.  make -j16.  sorted.
Comment 135 Luke Kenneth Casson Leighton 2020-02-26 21:21:10 GMT
found cellsArea.py.
found computeAbutmentBox.
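
for the record, the kind of pre-placement estimate i am after is roughly this (a sketch only, summing master-cell abutment boxes plus a guessed margin; it is *not* the actual cellsArea.py code):

from Hurricane import DbU

def estimate_width(cell, fixed_height_lambda, margin=0.20):
    # very crude: total standard-cell area (in lambda^2), plus a free-space
    # margin for feed cells and routing, divided by the height we want
    total = 0.0
    for instance in cell.getInstances():
        ab = instance.getMasterCell().getAbutmentBox()
        total += DbU.toLambda(ab.getWidth()) * DbU.toLambda(ab.getHeight())
    total *= (1.0 + margin)
    return total / fixed_height_lambda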
Comment 136 Luke Kenneth Casson Leighton 2020-02-27 19:02:16 GMT
Created attachment 28 [details]
cgt screenshot

ah HA!  i got the experiment5 to "work", by deliberately making the
abutment box 5 pixels larger (higher) than the "recommended" size
from computeAbutmentBox().

ta-daaa

i was able to get this to "work" by noticing that the cell locations
are expected to be 50 each.  by having a border of 5 pixels *higher*
than that, this space is entirely "free" and so the router is okay
to lay the tracks in it.
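
for reference, the "make it a bit taller than recommended" trick itself is only a couple of lines (a sketch; the real code is in experiment5):

from Hurricane import DbU, Box, UpdateSession

def enlarge_north(cell, extra_lambda=5.0):
    # grow the abutment box upwards so the router has free space for tracks
    UpdateSession.open()
    try:
        ab = cell.getAbutmentBox()
        cell.setAbutmentBox(Box(ab.getXMin(), ab.getYMin(),
                                ab.getXMax(),
                                ab.getYMax() + DbU.fromLambda(extra_lambda)))
    finally:
        UpdateSession.close()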

it would be good i think to test if this works on EAST, WEST and SOUTH
as well.

however to test on SOUTH, the location that the "grid" (50 x 10 divisions)
is set to must be offset by 5 pixels from the bottom.

likewise, to test on WEST, the (Master?) grid must be offset by 5 pixels from
the left.

any clues?
Comment 137 Luke Kenneth Casson Leighton 2020-02-27 22:38:15 GMT
okaay i have managed to "place" add.ap and sub.ap onto a blank alu_hier.ap
and got it to route.

etesian.place() complained "feed_0 already exists" feed_1 .... thousands
of complaints.

*once* i managed to get everything to route, it was however not reproducible.

so i have switched off etesian.place() for now - obviously the remaining
cells (Muxes etc.) are not placed or routed.

next question: how to get those extra cells placed?
Comment 138 Jean-Paul Chaput 2020-02-27 23:11:32 GMT
(In reply to Luke Kenneth Casson Leighton from comment #137)
> okaay i have managed to "place" add.ap and sub.ap onto a blank alu_hier.ap
> and got it to route.
> 
> etesian.place() complained "feed_0 already exists" feed_1 .... thousands
> of complaints.

  This typically happens when you try to place a block again.
  Feed cells are used by the placer, after placing the normal cells, to fill
  the free space. They are named "feed_XXXX".

> *once* i managed to get everything to route, it was however not reproducible.

  To be reproducible, be sure to erase any vst/ap files before each run.
  In your case, the ap should be sufficient.

> so i have switched off etesian.place() for now - obviously the remaining
> cells (Muxes etc.) are not placed or routed.
> 
> next question: how to get those extra cells placed?

  I will look into it this weekend !
Comment 139 Luke Kenneth Casson Leighton 2020-02-28 10:58:17 GMT
latest progress, jean-paul: the add.ap is reasonable; i am now trying to add a via ring after the katana global route.

the vias are fine, however the tracks are not: they are deleted, including VDD.

i will try adding them outside of the global routed area next
Comment 140 Luke Kenneth Casson Leighton 2020-02-28 11:16:44 GMT
nope still gets deleted. no idea why.
Comment 141 Jean-Paul Chaput 2020-02-28 16:04:15 GMT
(In reply to Luke Kenneth Casson Leighton from comment #140)
> nope still gets deleted. no idea why.

Promise, it will all be debugged this weekend.

I'm impressed by how fast you learn to use Coriolis!

By the way, the need for a PLL has been mentioned somewhere.
It happens that our lab has strong experience with PLLs;
one of our researchers is specialized in that topic.

I can contact him for the next step of the project.

He is currently building MEMS (energy harvesting) with Coriolis...
Comment 142 Luke Kenneth Casson Leighton 2020-02-28 17:26:07 GMT
(In reply to Jean-Paul.Chaput from comment #141)
> (In reply to Luke Kenneth Casson Leighton from comment #140)
> > nope still gets deleted. no idea why.
> 
> Promise, il will be all debugged this weekend.

ahh, appreciated. i will then stress-test it with some more variations that i did not commit.

> I'm impressed by how fast your learn to use Coriolis!

i _am_ 50, now :) and i stopped adding to my cv about 7 years ago when it got to 14 pages

> By the way, it has been evoked somewhere the need for a PLL.
> It happens that our lab have a strong experience about PLL,
> one of our researcher is specialized in that topic.

ahh gooood.

we can if he wants to put in an application for NLNet funding.

> I can contact him for the next step of the project.
> 
> He is currently building MEMS (energy harvesting) with Coriolis...

nice.
Comment 143 Luke Kenneth Casson Leighton 2020-02-28 18:47:04 GMT
ok so whatever was going on in doAlu16.py, where the tracks were not
being created (but VIAs were), i cut/paste RingOscillator.py, then
dropped add.ap creation *into* that base, rather than try it the other
way round, and it's worked.

so i now have a VDD / VSS "ring" around add.ap

in the previous experiment, where add.ap and sub.ap were dropped into the alu_hier
Cell, the auto-router would connect up a[0..15], b[0..15] and o[0..15] into
the *MIDDLE* of add.ap and sub.ap.

how can that be prevented?  how do you say, "i want the autorouter ONLY
to connect to the input areas defined at the edge"?

(just like a Standard Cell in other words.  really, this is a lot easier
to do by creating a non-standard Cell Library, putting add.ap and sub.ap
into it)
Comment 144 Staf Verhaegen 2020-02-28 19:15:29 GMT
(In reply to Luke Kenneth Casson Leighton from comment #142)
> (In reply to Jean-Paul.Chaput from comment #141)
> >
> > By the way, it has been evoked somewhere the need for a PLL.
> > It happens that our lab have a strong experience about PLL,
> > one of our researcher is specialized in that topic.
> 
> ahh gooood.
> 
> we can if he wants to put in an application for NLNet funding.

I propose to move that discussion to a separate bug thread. I am planning to have a 0.35um tape-out in July. There should be room for adding a PLL there.
Comment 145 Luke Kenneth Casson Leighton 2020-02-28 20:21:49 GMT
(In reply to Staf Verhaegen from comment #144)

> I propose to move that discussion to separate bug thread. I am planning to
> have a 0.35um tape-out in July. There should be room for adding PLL there.

http://bugs.libre-riscv.org/show_bug.cgi?id=155
Comment 146 Luke Kenneth Casson Leighton 2020-03-01 22:57:03 GMT
okaaay, jean-paul, are you ready? i did it... :)

so, here are the steps; you can see them in the (badly-named) ringoscillator.py
in experiment5:

* do a full etesian-and-katana place-and-route on add.vst to create add.ap

* likewise for sub.ap

* do a logical load/save on alu_hier.vst and save alu_hier.ap

* take a **COPY** of alu_hier.ap, called alu_hier_altered.ap and REPLACE
  the word "alu_hier" with "alu_hier_altered".

* take a **MANUAL** copy of alu_hier.vst (because i couldn't work out
  how to use alliance to do the following), and **REMOVE** subckt_48_add
  and subckt_49_sub from the VHDL, and save the results to alu_hier_altered.vst

* load alu_hier_altered.ap/vst and run Etesian place().  save.

* take a **COPY** of alu_hier_altered.ap, REPLACE the word "alu_hier_altered" 
  with "alu_hier_altered2" and save as alu_hier_altered2.ap

* take a copy of the **ORIGINAL** alu_hier.**VST** file, replace the word
  "alu_hier" with "alu_hier_altered2", and save as alu_hier_altered2.**VST**

* load alu_hier_altered2.ap/vst and run a *MANUAL* place on subckt_48_add
  and subckt_49_sub.  save the result

* load alu_hier_altered2.ap/vst and run the katana global router

ta-daaaa

*this* does what i want, in full, and successfully.
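
for convenience, the copy-and-rename steps above boil down to something like this (a sketch using only the python standard library; the manual subckt removal from the .vst, and the place/route runs in between, are not shown):

from pathlib import Path

def copy_renamed(src, dst, old, new):
    # copy src to dst, replacing every occurrence of the cell name
    Path(dst).write_text(Path(src).read_text().replace(old, new))

# after the logical load/save of alu_hier:
copy_renamed('alu_hier.ap', 'alu_hier_altered.ap',
             'alu_hier', 'alu_hier_altered')

# ... run Etesian.place() on alu_hier_altered, then:
copy_renamed('alu_hier_altered.ap', 'alu_hier_altered2.ap',
             'alu_hier_altered', 'alu_hier_altered2')
copy_renamed('alu_hier.vst', 'alu_hier_altered2.vst',
             'alu_hier', 'alu_hier_altered2')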

why does it work?

* the Etesian.place() will only place "all items" rather than
  "some selected items"

* therefore, to "solve" that, i *REMOVED* the two blocks (subckt_48_add
  and subckt_49_sub) from the *actual VHDL* file, such that Etesian.place()
  did not even know that they exist.

* then, by *reassociating* the resultant (successfully-placed) .ap file with
  a VHDL file that *does* know about the add and sub block, i was able to do
  the *manual* placement.

* also, as an added bonus, the global router recognised that there were
  tracks in the alu_hier.ap which had not been routed yet, and successfully
  connected them.  which was coincidentally exactly what i wanted.

so!... :)

i had to do the file-copying because AllianceFramework._catalog is a global.
there is unfortunately no detection of file-modification, so in a previous
iteration of the code, when i overwrote alu_hier.ap (copying alu_hier_altered
over the top of it), the catalog in crlcore/src/ccore/AllianceFramework.cpp
*did not notice*, and went:

"oh i wrote that out for you once already, you called
 AllianceFramework.getCell() again? i'm sorry, i am going
 to give you the *OLD* version because it's in _catalog".

so, a stat() on the vst and ap files would be a good idea, here, to
check if they've been modified.
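
something along these lines, i mean (a generic sketch, nothing to do with the actual AllianceFramework code):

import os

class CellCache:
    def __init__(self):
        self._cache = {}   # name -> (mtime, cell)

    def get(self, name, path, loader):
        mtime = os.stat(path).st_mtime
        cached = self._cache.get(name)
        if cached is not None and cached[0] == mtime:
            return cached[1]          # file unchanged: reuse the cached cell
        cell = loader(name)           # first load, or the file was modified
        self._cache[name] = (mtime, cell)
        return cell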

secondly: the way to avoid this entirely would be to have a way to
specify, to Etesian.place(), that certain items are simply *not* to
be included for placement.

a more extensive version of that would be to pass in a list of
items that *are* to be placed.

but, for now, success!  it's a mess, but it works.
Comment 147 Jean-Paul Chaput 2020-03-01 23:24:01 GMT
(In reply to Luke Kenneth Casson Leighton from comment #146)

> but, for now, success!  it's a mess, but it works.

Again, I bow to your tenacity...

You did it right to circumvent the problem.

I did correct the 5 extra lambda problem on the north side of the blocks
and the missing geometry support. Not committed yet.

I also saw what causes the problem in the hierarchy exploration that
causes Etesian to take all instances and not stop at selected levels
of hierarchy. I am looking for a clean solution. I hope it will be done
tomorrow.
Comment 148 Luke Kenneth Casson Leighton 2020-03-02 00:09:48 GMT
(In reply to Jean-Paul.Chaput from comment #147)
> (In reply to Luke Kenneth Casson Leighton from comment #146)
> 
> > but, for now, success!  it's a mess, but it works.
> 
> Again, I bow to your tenacity...

haha amusant :)

> You did it right to circumvent the problem.

more importantly, to illustrate exactly what is needed.

because, if it can be done in a "hack" fashion, then it definitely can be done in a clean way too.
 
> I did correct the 5 extra lambda problem on the north side of the blocks
> and the missing geometry support.

ah fantastic, i saw something yesterday about it.

> Not commited yet.
> 
> I did also see what cause problem in the hierarchy exploration that
> cause Etesian to take all instances and not stop at selected levels
> of hierarchy. 

you saw, perhaps, one mess: i did "place" on the subckt_48_add (and 49 sub) and Etesian.place went, "oh, i will walk the VST file and re-add all the sub-circuits of add and sub for you, nicely arranged in areas not covered by the manual 48 and 49"

!! :)

> I am looking for a clean solution. I hope it is done
> tomorrow.

it is tricky, to do this in a "quick" way.

i thought to suggest an extra field (bool flag) to say "ignore in etesian place" which toColoquinte would use.

another way would be to use the "FIXED / PLACED / UNPLACED" field: if it is not UNPLACED, ignore it.

however really the best way (perhaps for later) is to be able to pass a list of cells to place, to Etesian.place.

then, it becomes possible to set a *different* abutment box, and to call Etesian.place *multiple times*, each with a different subset of cells.

finally let katana do the routing.
Comment 149 Luke Kenneth Casson Leighton 2020-03-02 00:50:19 GMT
okok, what you could do is mark each subcell with a boolean flag "etesianignore", then call Etesian.place(), then *clear* some of those flags and call it again; repeat that.

this i suspect would be very quick to code up, and have toColoquinte check the boolean in the recursive tree walk of cells to place.
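
written out, the idea is just this (purely hypothetical: no "etesianignore" flag or selective place() exists in Etesian today, this is only the suggestion expressed as code):

def place_in_passes(etesian, cell, passes):
    # passes: a list of lists of instance names, one sub-set per place() call
    for keep in passes:
        for instance in cell.getInstances():
            # hypothetical flag telling toColoquinte to skip this instance
            instance.etesianignore = instance.getName() not in keep
        etesian.place()   # would only place the non-ignored instances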
Comment 150 Luke Kenneth Casson Leighton 2020-03-02 16:19:28 GMT
Created attachment 29 [details]
screenshot of fpmul64

i figured just for lolz i'd try the IEEE754 FP 64-bit multiplier,
and see what happens.

so far it's been running for 10 minutes.  i think it said a 6000 x 6000 block...

  o  Driving Hurricane data-base.
     - Active AutoSegments .............................................. 152596
     - Active AutoContacts .............................................. 183414
     - AutoSegments ..................................................... 152935
     - AutoContacts ..................................................... 184092
     - Same Layer doglegs ............................................... 152935
     - Done in .................................................. 0.32s, 0 bytes
     - Raw measurements ................................ 0.322703s, +0Kb/701.9Mb
  o  Deleting ToolEngine<Katana> from Cell <fpmul64_cts>
Katana::Session::_open()

ok here we go:

===== Terminals .......... 1322  
===== Instances .......... 20233 
===== Connectors ......... 114375

-rw-r--r-- 1 lkcl lkcl  5389049 Mar  2 16:11 fpmul64_cts_r_ext.al

head fpmul64_cts.ap
V ALLIANCE : 6
H fpmul64_cts,P,2/3/2020,100
A 0,0,610000,615000


yes, so 6100 x 6150.  biiiig. and very pretty :)

i had to flatten it, otherwise it wouldn't work (some more of those
vst errors, jean-paul)
Comment 151 Jean-Paul Chaput 2020-03-04 00:50:40 GMT
(In reply to Luke Kenneth Casson Leighton from comment #148)
> (In reply to Jean-Paul.Chaput from comment #147)
> > (In reply to Luke Kenneth Casson Leighton from comment #146)
> > 
> > > but, for now, success!  it's a mess, but it works.

I should now have made the relevant corrections.

I put under alliance-check-toolkit/benchs/nmigen/ALU16 a
corrected version of your script. To run it graphically:

   $ make vst
   $ make cgt

   Then Tools -> Python Scripts -> doAlu16
   (or [SHIFT+P] [SHIFT+S])

   Then enjoy!

The result is extremely messy, as the connectors of the blocks
are very badly placed with regard to each other.

If you run multiple times (without make clean; make vst) you
will get a slightly different result, as the alu16.vst is rewritten
with the order of its instances/signals changed.

Didn't answer earlier as my brain is mono-threaded, so I process
one task at a time only. Now catching up with the other questions...
Comment 152 Luke Kenneth Casson Leighton 2020-03-04 08:41:34 GMT
(In reply to Jean-Paul.Chaput from comment #151)
> (In reply to Luke Kenneth Casson Leighton from comment #148)
> > (In reply to Jean-Paul.Chaput from comment #147)
> > > (In reply to Luke Kenneth Casson Leighton from comment #146)
> > > 
> > > > but, for now, success!  it's a mess, but it works.
> 
> I should now have made the relevant corrections.

ah! great! so i looked at doAlu16.py and it works as intuitively expected: place the larger blocks, then call Etesian.place() and that just places the *remaining* cells.

> I put under alliance-check-toolkit/benchs/nmigen/ALU16 a
> corrected version of your script. To run it graphically:
> 
>    $ make vst
>    $ make cgt
> 
>    Then Tools -> Python Scripts -> doAlu16
>    (or [SHIFT+P] [SHIFT+S])
> 
>    Then enjoy!

i can confirm it works.

i saw you put "BLOCKAGE1/2/3" in, i will look at that later.

> 
> The result is extremely messy, as the connectors of the blocks
> are very badly placed regarding each others.

yes, i had not got to the point of considering where best to put those.

now i can experiment with that.

> If you run multiple timmes (without make clean; make vst) you
> will get slightly different result as the alu16.vst is rewritten
> with the order of it's instances/signals changeds).

interesting.

> 
> Didn't answer earlier as my brain is mono-thread, so I process
> one task at a time only. 

sorry!  i did transfer to other things.

> Now catching up with the others questions...

ok.

would it help if i reported bugs on gitlab.lip6.fr? (i would need an account to do so)
Comment 153 Jean-Paul Chaput 2020-03-04 11:29:48 GMT
(In reply to Luke Kenneth Casson Leighton from comment #150)

On my little Dell XPS 13 9370, it takes 3 minutes (and 6 seconds, ok)
for the whole P&R. I committed changes so that it successfully completes.

To use the nsxlib standard cell library you must use the nsxlib
"DESIGN_KIT" in the Makefile. It needs 15.4% of free space, with
the "bloat profile" nsxlib (see coriolis2/settings.py).
The size seems twice as big (in lambda) because the lambda in
nsxlib is half the one of sxlib (for better technology fitting).

I also did make a print of it, but it is more than 1Mb so I will
directly email it to you.

You were getting VST errors in LVX most likely because the router
did not complete. You must look at:

  o  Computing statistics.
     - Processeds Events Total .......................................... 252198
     - Unique Events Total .............................................. 139250
     - # of GCells ....................................................... 15376
     - Track Segment Completion Ratio .......................... 100% [139250+0]
     - Wire Length Completion Ratio .......................... 100% [13200080+0]
     - Wire Length Expand Ratio ......................... 4.64% [min:12614970.5]
     - Unrouted horizontals .......................................... -nan% [0]
     - Unrouted verticals ............................................ -nan% [0]
     - Done in ................................................. 22.48s, 105.5Mb
     - Raw measurements ............................ 22.4796s, +108008Kb/856.2Mb

The completion rates must be 100% and you must have "+0" and no unplaced segments
list afterwards.
Comment 154 Luke Kenneth Casson Leighton 2020-03-04 13:38:36 GMT
(In reply to Jean-Paul.Chaput from comment #153)
> (In reply to Luke Kenneth Casson Leighton from comment #150)
> 
> On my little Dell XPS 13 9370, it takes 3 minutes (and 6 seconds, ok)
> for the whole P&R. I commited changes so it successfully complete.
> 
> To use the nsxlib standard cell libraries you must use the nsxlib
> "DESIGN_KIT" in the Makefile. It needs 15.4% of free space, with
> the "bloat profile" nsxlib (see coriolis2/settings.py).
> The size seems twice bigger (in lambda) because the lambda in
> sxlib is half the one of sxlib (for better technology fitting).

thank you for fixing this.  it will make an interesting experiment,
at least.  i want to try breaking it down in a more advanced version
of alu16, later.
 
> I also did make a print of it, but it is more than 1Mb so I will
> directly email it to you.

ok.

> The completion rates must be 100% and you must have "+0" and no unplaced
> segments
> list afterwards.

ok - is that reported as a "success/fail" return code from the python functions? i have seen a number of places where routing doesn't complete and there are no warnings.

if so, then when the screen is covered in debug logs, from python we can
still catch important errors.
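
in the meantime a crude check can be done from outside (a sketch that just greps the report shown above out of a captured log file; obviously not a substitute for a proper return code):

import re
import sys

def routing_completed(logfile):
    text = open(logfile).read()
    m = re.search(r'Track Segment Completion Ratio\s*\.*\s*(\d+)%\s*\[\d+\+(\d+)\]',
                  text)
    return bool(m) and m.group(1) == '100' and m.group(2) == '0'

if __name__ == '__main__':
    sys.exit(0 if routing_completed(sys.argv[1]) else 1)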
Comment 155 Luke Kenneth Casson Leighton 2020-03-04 14:06:21 GMT
Created attachment 30 [details]
patch to alliance-check-toolkit ALU16

hiya jean-paul,

ok i experimented with different abutment boxes, and got closer to what i would like.

i am recording this one for you because just above the add and sub blocks,
you can see that the vertical traces go nowhere. they go upwards, as if there
was an effort to then make a horizontal trace, but the decision was made
to go from a different vertical point.

i think it will be possible to reduce the height by another 30, but not 50, because in the top 50 there are VIAs, and those will clearly not fit into the space already filled by the green horizontal tracks that connect add and sub together.

the ideal layout for this particular example, i think, would be to have the ADD and SUB inputs on the left of NORTH and their outputs on the *right* of NORTH (with the circuits doing a U-turn inside), then place ADD rotated clockwise-by-90 and SUB anti-clockwise-by-90.

i realise it is a lot of trouble to go to, for such a small example,
however the principle is important and it is fast to complete, because
the blocks are so small.
Comment 156 Luke Kenneth Casson Leighton 2020-03-04 14:25:34 GMT
ok, in mksym.sh the dks.d/mosis.sh symlink was missing; i just added it.

i also switched off YOSYS_FLATTEN to see what would happen, and
a syntax error occurs in scnorm.vst.  subckt_409_specialcases
does not have "oz" connected to anything, i.e. the VHDL is:

   , oz    =>

i noticed a *lot* of "VDD -> false" warnings, these are
on things that are currently unused.  looking back through
these warnings, it looks like the entirety of oz is
connected to $false (an output of zero - nothing - is
required from this module).
Comment 157 Jock Tanner 2020-03-05 19:25:30 GMT
I have Arch Linux on my work machine. Would it be OK in terms of reproducibility if I use `schroot` and `debootstrap` from the Arch repository? Or maybe it would be better to install Debian 10 in an LXC container and proceed with debootstrap/schroot from there? I'm not considering KVM or dual boot, since I am somewhat limited on resources.
Comment 158 Jacob Lifshay 2020-03-05 19:30:41 GMT
(In reply to Jock Tanner from comment #157)
> I have Arch linux on my work machine. Would it be OK in terms of
> reproducibility if I use `schroot` and `debootstrap` from Arch repository?
> Or maybe it would be better to install Debian 10 in LXC container and
> proceed with debootstrap/schroot from there? I'm not considering KVM or dual
> boot, since I am somewhat limited on resources.

If you can get it to run in a Docker container by writing a Dockerfile, that would be quite useful, since that's much easier to set up CI for (and widely supported since Docker is the defacto standard for containerization).
Comment 159 Luke Kenneth Casson Leighton 2020-03-05 19:45:39 GMT
(In reply to Jacob Lifshay from comment #158)

> If you can get it to run in a Docker container by writing a Dockerfile, that
> would be quite useful, since that's much easier to set up CI for (and widely
> supported since Docker is the defacto standard for containerization).

except it's not what i've set up (and i'm really not keen on Docker - i'll
use it as a "last resort if forced to" rather than "actively desire and
enjoy using it").  i recently had the displeasure of being forced to use
it, and it was every bit the hack i was expecting it to be (total failure
to include "version" information in Dockerfiles).
Comment 160 Luke Kenneth Casson Leighton 2020-03-05 19:48:18 GMT
(In reply to Jock Tanner from comment #157)
> I have Arch linux on my work machine. Would it be OK in terms of
> reproducibility if I use `schroot` and `debootstrap` from Arch repository?
> Or maybe it would be better to install Debian 10 in LXC container and
> proceed with debootstrap/schroot from there? I'm not considering KVM or dual
> boot, since I am somewhat limited on resources.

https://www.archlinux.org/packages/community/any/debootstrap/

apparently, if that's correct, debootstrap is available for archlinux
in the community repo.  so you shouldn't need to go the trouble of a
virtual-machine-then-a-chroot.

the instructions for bind-mounting /dev, /dev/pts etc. should all work
perfectly fine as well.
Comment 161 Luke Kenneth Casson Leighton 2020-03-05 19:53:42 GMT
(In reply to Jacob Lifshay from comment #158)

> If you can get it to run in a Docker container by writing a Dockerfile, that
> would be quite useful, since that's much easier to set up CI for (and widely
> supported since Docker is the defacto standard for containerization).

jean-paul already has a Dockerfile written.  my only concern with Docker
is the fact that you have to run a root-level service, and it pisses about
with unionfs and all kinds of multi-way-mounting, *and* downloads things
from sources that we have *no idea* if dockerhub is secure, or properly
maintained, or if the team behind Docker are competent enough to do proper
GPG-signing on packages such that it doesn't *matter* if the main site gets
hacked.

none of this i want to spend the time reviewing in order to find
out, and given that they can't be bothered to put version information
into Dockerfiles so that you don't have your time wasted by downloading
a whole stack of crap only to find *at the last line* that you needed
a later version of docker-build...

debootstrap works.  it's worked for a long time, and it's worked well.
Comment 162 Jacob Lifshay 2020-03-05 19:59:05 GMT
a Dockerfile can use a different source other than dockerhub if you tell it to.
Comment 163 Luke Kenneth Casson Leighton 2020-03-05 20:07:15 GMT
(In reply to Jacob Lifshay from comment #162)
> a Dockerfile can use a different source other than dockerhub if you tell it
> to.

i do get that.  i had to just recently set up couchdb in Docker, for running
a kubernetes cluster. it was painful and i wasted almost 2 days banging my
head against a wall, getting it to work.

all of which is time spent working out from non-standard locations.
debootstrap uses debian-archive-keyring "by default", out-of-the-box,
and i know it grabs the minimum packages required for an operational
chroot.

it would be time spent messing about researching something that's not
worth the effort when there's an alternative tool (debootstrap) that
does the job.
Comment 164 Luke Kenneth Casson Leighton 2020-03-05 20:33:07 GMT
sorry, jacob: bit overkill there - docker was however really, _really_
tedious, and the takeaway lesson was definitely leaning towards "avoid".
if however the chroot setup really does get awkward, then docker is
the "next last resort" to consider.
Comment 165 Jock Tanner 2020-03-05 20:39:25 GMT
Luke, Jakob!

Although I am more familiar with virt-manager, I'm used to thinking of Docker as a good way of creating a reproducible test/build environment.

But to use the program built in Docker, fair and square, we must package it properly, then extract the artifacts (packages). It poses additional questions about packaging tools, target platforms, et c. That seems far beyond the point of this manual: https://libre-riscv.org/HDL_workflow/coriolis2/

Sure, it's possible to use a desktop program right from inside the container, without packaging. But it creates another kind of hassle. X forwarding and shared folders is what first comes to mind.

So I guess chroot just suits us better atm.
Comment 166 Luke Kenneth Casson Leighton 2020-03-05 22:01:49 GMT
ah! it's funny, i've been doing sysadmin stuff for so long that i forget things: "it works this way" and i can't entirely explain it clearly.  i had forgotten that i'd learned to "export DISPLAY=:0.0" from inside a chroot and then "xhost +" from outside the chroot (or in this case, adding those to ~/.bash_profile), and graphical programs fire up onto the outside display with no trouble.
Comment 167 Jock Tanner 2020-03-05 22:21:23 GMT
Alliance build fails on Debian 10.

When executing `make -j1 install` (1.3):

> make[2]: Leaving directory '/home/tanner/alliance/build/documentation/tutorials'
> make[2]: Entering directory '/home/tanner/alliance/build/documentation'
> cd ../../alliance/alliance/src/documentation/overview; make overview.pdf
> ...
> ! Package inputenc Error: Invalid UTF-8 byte sequence.

and in the end

> make: *** [Makefile:573: install-recursive] Error 1

I can attach more detailed output, if necessary. But considering this

https://tex.stackexchange.com/q/429190

and this

https://packages.debian.org/search?keywords=texlive

TeX Live in Debian 10 is too fresh to build some pdf files.

But it seems like I can go on with `make -j1 install-exec` and `make -j1 install-data`. Am I right?
Comment 168 Luke Kenneth Casson Leighton 2020-03-05 22:36:39 GMT
(In reply to Jock Tanner from comment #167)

> 
> I can attach more detailed output, if necessary. But considering this
> 
> https://tex.stackexchange.com/q/429190

ok i forwarded to alliance-users, jean-paul should be able to pick it up

i was still on debian/9 partially upgraded to testing, so didn't spot it, sorry

> and this
> 
> https://packages.debian.org/search?keywords=texlive
> 
> TeX Live in Debian 10 is too fresh to build some pdf files.
> 
> But it seems like I can go on with `make -j1 install-exec` and `make -j1
> install-data`. Am I right?

see what happens when you get to the alliance-check-toolkit.  if the benchs work (nmigen alu, AM2600 something like that) then you've got everything running.
Comment 169 Jock Tanner 2020-03-06 02:06:09 GMT
Another issue in Debian 10. Coriolis build (1.4) fails when searching for its vendored prerequisite libraries: AGDS, Cif, VLSISAPD et c. Attaching its output.

The problem is the build system puts the vendored shared libs into

> ../Linux.x86_64/Release.Shared/install/lib64

, then searches for them in

> ../Linux.x86_64/Release.Shared/install/lib

Looks trivial, but I'm not good enough in CMake to propose a real solution. As a makeshift solution though, I created a symlink `lib` → `lib64`, and then the build succeeded.
Comment 170 Jock Tanner 2020-03-06 02:07:20 GMT
Created attachment 31 [details]
Coriolis build failed on Debian 10
Comment 171 Jock Tanner 2020-03-06 04:01:20 GMT
Created attachment 32 [details]
Where is the chicken?

I've reached this point:

> Tutorials / Run Demo (Python Flavour)

But instead of the cute chicken I see this.
Comment 172 Jock Tanner 2020-03-06 04:16:15 GMT
I'm trying to run benches.

> tanner@ibmpc:~$ make

And then I got

> make: *** [mk/alliance.mk:56: dreal] Segmentation fault (core dumped)

This happens after the `dreal` successfully opens the display.
Comment 173 Luke Kenneth Casson Leighton 2020-03-06 11:03:39 GMT
(In reply to Jock Tanner from comment #169)
> Another issue in Debian 10. Coriolis build (1.4) fails when searching for
> its vendored prerequisite libraries: AGDS, Cif, VLSISAPD et c. Attaching its
> output.
> 
> The problem is the build system puts vendored shared libs to
> 
> > ../Linux.x86_64/Release.Shared/install/lib64
> 
> , then searching for them in
> 
> > ../Linux.x86_64/Release.Shared/install/lib

aw doh
 
> Looks trivial, but I'm not good enough in CMake to propose a real solution.
> As a makeshift solution though, I created a symlink `lib` → `lib64`, and
> then the build succeeded.

hm odd.  i really need to run through this as well.

so there was a reason why we started with debian/9.  sigh.

the issue is that if you start with debian/9 it doesn't have an advanced-enough version of python 3 to get underway with nmigen in the same system.  i gave serious consideration to scripting a from-scratch source-build of python 3 (which is what you need to do if installing pypy3, which we did a few months ago).

looovely.

ok i'll try replicating where you've got to.
Comment 174 Luke Kenneth Casson Leighton 2020-03-06 11:35:47 GMT
(In reply to Jock Tanner from comment #171)
> Created attachment 32 [details]
> Where is the chicken?
> 
> I've reached this point:
> 
> > Tutorials / Run Demo (Python Flavour)
> 
> But instead of the cute chicken I see this.

cool!  that's the correct thing.  success!
Comment 175 Luke Kenneth Casson Leighton 2020-03-06 11:38:37 GMT
(In reply to Jock Tanner from comment #169)
> Another issue in Debian 10. Coriolis build (1.4) fails when searching for
> its vendored prerequisite libraries: AGDS, Cif, VLSISAPD et c. Attaching its
> output.
> 
> The problem is the build system puts vendored shared libs to
> 
> > ../Linux.x86_64/Release.Shared/install/lib64
> 
> , then searching for them in
> 
> > ../Linux.x86_64/Release.Shared/install/lib

add export LD_LIBRARY_PATH=${ALLIANCE_TOP}/lib64:${LD_LIBRARY_PATH} into
~/.bash_profile

i updated the wiki file to reflect this.
Comment 176 Jean-Paul Chaput 2020-03-06 12:32:21 GMT
(In reply to Jock Tanner from comment #170)
> Created attachment 31 [details]
> Coriolis build failed on Debian 10

The Coriolis ccb.py is trying to guess the system it is running under from
the result of "uname -srm". When using a chrooted environment (or docker
for that matter), it may get confused because the kernel is still the
one of the host (in your case Arch Linux).

Could you send me the result of "uname -srm"?
And what is the rightful location to use, "lib/" or "lib64/"?

And, responding to comment #173 from Luke, we should head for Debian 10.
I will make the corrections in a few days.
Comment 177 Jock Tanner 2020-03-06 15:35:46 GMT
(In reply to Jean-Paul.Chaput from comment #176)
> (In reply to Jock Tanner from comment #170)
> > Created attachment 31 [details]
> > Coriolis build failed on Debian 10
> 
> The Coriolis ccb.py is trying to guess the system it is running under by
> the result of "uname -srm". When using a chrooted environment (or docker
> for that matter), it may got confused because the kernel is still the
> one of the host (in your case Arch Linux).

I have installed Debian 10 & LXDE in KVM and went through the whole process again, starting with schroot/debootstrap. The bug with `lib`-`lib64` remains.

> Could you send me the result of "uname -srm".
> And what is the rightful location to use, "lib/" or "lib64/" ?

Under Arch it is ‘Linux 5.5.3-arch1-1 x86_64’. Under Debian 10 it is ‘Linux 4.19.0-8-amd64 x86_64’.

There is also a Python-related difference between the hosts that may be worth noting. The `python` command is an alias of `python2` in Ubuntu and most other Linuxes, but of `python3` in Arch and its derivatives (Manjaro, Anarchy). But I think this difference should not affect the chrooted environment.
Comment 178 Jock Tanner 2020-03-06 15:37:20 GMT
> And what is the rightful location to use, "lib/" or "lib64/" ?

Of that I am not sure.
Comment 179 Luke Kenneth Casson Leighton 2020-03-06 16:03:35 GMT
(In reply to Jock Tanner from comment #178)
> > And what is the rightful location to use, "lib/" or "lib64/" ?
> 
> Of that I am not sure.

it'll be "what the compiler infrastructure installs things in" which of course has changed between debian 9 and debian 10.

sigh.

the simplest thing to do, jean-paul, is just to have coriolisEnv.py add *both*.  it really doesn't matter (as long as people clear out the install directory if upgrading from debian 9 to debian 10)
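
something along these lines, i mean (a sketch of the suggestion only, not the actual coriolisEnv.py code):

import os

def prepend_lib_paths(install_top):
    # add both lib/ and lib64/ under the install top, whichever exist
    paths = [p for p in (os.path.join(install_top, d) for d in ('lib', 'lib64'))
             if os.path.isdir(p)]
    current = os.environ.get('LD_LIBRARY_PATH', '')
    os.environ['LD_LIBRARY_PATH'] = os.pathsep.join(
        paths + ([current] if current else []))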
Comment 180 Luke Kenneth Casson Leighton 2020-03-06 16:30:36 GMT
ok jock i replicated things, all good.  yes you're right, do that symlink
for now.  i've updated the page https://libre-riscv.org/HDL_workflow/coriolis2/
can you double-check it looks reasonable.

the next thing to do would be to make sure nmigen is installed correctly, as well as yosys from the latest source code, and then try some of the soclayout experiments.
some of them aren't fully documented: you can see in this bugreport that i say things like "run make vst then python doAlu16.py" rather than just "make lvx"; however, just go through them and see how you get on, ok?

cole, if you can also run through that it would be good.  remember: *don't waste time* trying to "work things out", just literally and blindly follow the instructions, and if you don't understand, *ask immediately*, ok?
Comment 181 Jock Tanner 2020-03-06 16:51:44 GMT
(In reply to Luke Kenneth Casson Leighton from comment #173)

> … i gave serious consideration to scripting a from-scratch
> source-build of python 3 (which is what you need to do if installing pypy3,
> which we did a few months ago).

Did you use pyenv back then? If not, I would highly recommend it.

(In reply to Luke Kenneth Casson Leighton from comment #180)
> ok jock i replicated things, all good.  yes you're right, do that symlink
> for now.  i've updated the page
> https://libre-riscv.org/HDL_workflow/coriolis2/
> can you double-check it looks reasonable.

There was also a minor quirk I did not mention.

When you do (1.3)

> git clone https://www-soc.lip6.fr/git/alliance.git

the sources appear in '~/alliance/alliance/alliance/src', not in '~/alliance/alliance/src'. The rest of the paths are correct.

> the next thing to do would be to make sure nmigen is installed correctly as
> well as yosys from latest source code, and try some of the soclayout
> experiments.
> some of them aren't fully documented, you can see in this bugreport i say
> things like "run make vst then python doAlu16.py" rather than just "make
> lvx", however just go through them and see how you get on ok?

TBH I could not grasp those instructions at all. =)

I created my `user-LOGIN.mk` and ran `make` in different folders. I came to the realization that my Arch-hosted chroot is broken: every attempt at `make` ended up with `dreal` segfaulting. But in the Debian 10-hosted chroot I managed to get either an empty `dreal` window or an error message other than a segfault.
Comment 182 Jock Tanner 2020-03-06 16:55:28 GMT
Created attachment 34 [details]
This is how my “empty dreal window” looks like

Just in case.
Comment 183 Luke Kenneth Casson Leighton 2020-03-06 17:16:37 GMT
(In reply to Jock Tanner from comment #181)
> (In reply to Luke Kenneth Casson Leighton from comment #173)
> 
> > … i gave serious consideration to scripting a from-scratch
> > source-build of python 3 (which is what you need to do if installing pypy3,
> > which we did a few months ago).
> 
> Did you use pyenv back then? If not, I would highly recommend it.
> 
> (In reply to Luke Kenneth Casson Leighton from comment #180)
> > ok jock i replicated things, all good.  yes you're right, do that symlink
> > for now.  i've updated the page
> > https://libre-riscv.org/HDL_workflow/coriolis2/
> > can you double-check it looks reasonable.
> 
> There was also a minor quirk I did not mention.
> 
> When you do (1.3)
> 
> > git clone https://www-soc.lip6.fr/git/alliance.git
> 
> sources appears in '~/alliance/alliance/alliance/src', not in
> '~/alliance/alliance/src'. The rest of the paths are correct.

ok great, do you want to alter that in the wiki, to get familiar
with doing that?

> > the next thing to do would be to make sure nmigen is installed correctly as
> > well as yosys from latest source code, and try some of the soclayout
> > experiments.
> > some of them aren't fully documented, you can see in this bugreport i say
> > things like "run make vst then python doAlu16.py" rather than just "make
> > lvx", however just go through them and see how you get on ok?
> 
> TBH I could not grasp that instructions at all. =)

:)

cd ~/soclayout
cd experiment
git pull
make vst
make view

that should do it
 
> I created my `user-LOGIN.mk` and run `make` in a different folders. 

yep, you don't want to be running plain "make", because that picks the first thing
rather than the targeted requirements.


>I came
> to realization that my Arch-hosted chroot is broken: every attempt at `make`
> has ended up in `dreal` segfaulted. But in Debian 10-hosted chroot I managed
> to get either an empty `dreal` window or an error message other than
> segfault.

you don't want dreal although it is quite pretty (just tried it)

did you remember "xhost +" in the outside system?

try "apt-get install xterm" just to see if that's happy.  if not, then you may need to do something strange, like install xnest (and set export DISPLAY=1.0 or however xnest works), or use ssh -X forwarding or... you get the idea.  remember to install and run even the most basic window manager onto the Xnest (twm -display :1) otherwise everything ends up top-left corner and no window bars

https://box.matto.nl/xnest.html

i just tried it, and it works fine.

we do specifically recommend that the host system is debian, rather than a  hybrid (not even ubuntu).  this avoids precisely these kinds of issues.
Comment 184 Luke Kenneth Casson Leighton 2020-03-06 17:21:13 GMT
https://libre-riscv.org/HDL_workflow/coriolis2/

added Xnest section, at the end.  doesn't need ssh forwarding.  doesn't
need xhost +
Comment 185 Jock Tanner 2020-03-06 18:24:01 GMT
(In reply to Luke Kenneth Casson Leighton from comment #183)

> ok great, do you want to alter that in the wiki, to get familiar
> with doing that?

Sure. Do I need an account for that?
 
> cd ~/soclayout
> cd experiment
> git pull
> make vst
> make view
> 
> that should do it

I tried

> $ git clone ssh://gitolite3@git.libre-riscv.org:922/libresoc.git

to get the source, but first I got password confirmation requests

> gitolite3@git.libre-riscv.org's password:

and then, after a dozen attempts − 

> ssh: connect to host git.libre-riscv.org port 922: Connection refused

I had my private key in ‘~/.ssh’ folder on chroot environment. Am I doing something wrong, or is it a thing with the git server?
  
> you don't want dreal although it is quite pretty (just tried it)
> 
> did you remember "xhost +" in the outside system?
> 
> try "apt-get install xterm" just to see if that's happy.  if not, then you
> may need to do something strange, like install xnest (and set export
> DISPLAY=1.0 or however xnest works), or use ssh -X forwarding or... you get
> the idea.  remember to install and run even the most basic window manager
> onto the Xnest (twm -display :1) otherwise everything ends up top-left
> corner and no window bars
> 
> https://box.matto.nl/xnest.html
> 
> i just tried it, and it works fine.

No no no, it's broken, but in a more subtle way. =) `xterm` (and `cgt`) are working fine just with “DISPLAY=:1”. The problem only applies to `dreal`.
Comment 186 Luke Kenneth Casson Leighton 2020-03-06 18:33:01 GMT
(In reply to Jock Tanner from comment #185)
> (In reply to Luke Kenneth Casson Leighton from comment #183)
> 
> > ok great, do you want to alter that in the wiki, to get familiar
> > with doing that?
> 
> Sure. Do I need an account for that?

yes just create something - whatever you like.  if it's more convenient i can give you access to the ikiwiki git repository
  
> > cd ~/soclayout
> > cd experiment
> > git pull
> > make vst
> > make view
> > 
> > that should do it
> 
> I tried

make view or make cgt then open the cell manually (Ctrl-O) and then type the name of any of the *.ap files, without the extension.
 
> > $ git clone ssh://gitolite3@git.libre-riscv.org:922/libresoc.git
> 
> to get the source, but first I got password confirmation requests

ah whoops, yes, don't do passwords, that's important: all that will happen is, your IP address gets instantly banned when the failure turns up in /var/log/auth.log

 
> > gitolite3@git.libre-riscv.org's password:
> 
> and then, after a dozen attempts − 
> 
> > ssh: connect to host git.libre-riscv.org port 922: Connection refused
> 
> I had my private key in the ‘~/.ssh’ folder in the chroot environment. Am I
> doing something wrong, or is it a thing with the git server?

ah, arse.  no, i forgot to say: don't try password auth, because that's a signal for fail2ban to instantly ban your ip address.

can you find out your external public ip address and let me know what it is by email?  i'll need to whitelist it.

the reason i set draconian fail2ban rules is because the server gets several hundred ssh attacks *a day*.  there are therefore usually between 80 and 150 ssh "banned" IP addresses, all the time. i got fed up with it and set extremely strict rules.


> No no no, it's broken, but in more subtle way. =)

ok.  well, we're not using dreal.

>  `xterm` (and `cgt`) are
> working fine just with “DISPLAY=:1”. The problem only applies to `dreal`.

we're not using dreal, so it's not a problem.  only cgt (i.e. "make view")
Comment 187 Jock Tanner 2020-03-10 04:17:15 GMT
My working environment is finally set up.

I have fixed a typo regarding virtual environment activation in 6.5 of HDL workflow. I also mentioned the dependencies (either python3-venv or python-virtualenv) that were kinda taken for granted. I think now the documentation is Buster-proof.

Maybe now it's a good time to script the whole process? Or is it not an issue?

I've also been thinking of up- and downsides of using chroot vs Docker vs native OS packaging, and soon realized that the biggest downside of our current setup is that it can be problematic for me to unleash the full potential of my IDE on this project. By “full potential” I mean visual debugging and inspection. I am using Pycharm, but this applies to any fairly advanced IDE, like Eclipse/PyDev. They usually rely on ‘native’ (/usr/bin/) or virtual (provided by virtualenv, conda et c.) Python prefixes for code inspection, and chroot is neither of them.

So I started thinking about how to get rid of chroot. As I learned by delving into #178, we are dealing with 4 categories of tools:

1. Native packages, provided by Debian repositories: binutils, gcc, clang, python 2/3, latex, gtkwave et c.

2. Python packages provided by PyPI: jinja2 and sphinx. They are also present in Debian repositories, so this category can be merged with 1 if necessary.

3. Binary packages that require building from source: alliance, coriolis 2, yosys, symbiyosys.

4. Python packages that are not in PyPI (at least not their latest versions): nmutil, nmigen, ieee754fpu, sfpy.

Category 1 can be used right away without chroot or any other isolation tool. Isolation of categories 2 and 4 may be carried out with virtualenv, which is a pythonista's good old friend. Category 4 may or may not require building; it does not matter in this case. Only category 3 poses a problem.

When using virtualenv, the $VIRTUAL_ENV directory can be used for building and installing not only Python extensions, but roughly any FHS-compatible programs and libraries. Basically you just have to supply --prefix=$VIRTUAL_ENV to the build configuration script, and virtualenv magic does the rest. You need no system-wide configs and no root access for this.

For example, xapian-haystack installs Xapian along with its Python bindings right in $VIRTUAL_ENV: https://github.com/notanumber/xapian-haystack/. (Xapian itself is written in C++ and does not require Python to work.)

How nice it would be if I could do the same with our category 3!

Sadly, this document (https://www-soc.lip6.fr/sesi-docs/coriolis2-docs/coriolis2/en/html/users-guide/UsersGuide.html#fixed-directory-tree) suggests that the build process of Coriolis 2 cannot be tamed to my needs, at least not as easily as with `--prefix=$VIRTUAL_ENV`.

Looks like I'm at the point of installing another copy of Pycharm into the chroot. Or is there some kind of alternative solution that I have not considered?
Comment 188 Yehowshua 2020-03-10 04:22:48 GMT
Docker works really well usually and is repeatably deployable with CI etc etc.

We should really get a solid workflow going - a bad workflow can bring projects to a crashing halt down the line.

Also, I do dev from my Mac mostly, I can spin up Linux docker containers on Mac easily - chroot not so much…

What we usually do in the webbed world is: you have your code in a repository, which you clone to your host machine. The Docker container then spins up and binds to the cloned repo, and can connect to ports on your host machine, exposing services.

So you can run pyCharm over your git repo, and all the tools sit inside the Docker container.

Yehowshua
Comment 189 Yehowshua 2020-03-10 04:28:10 GMT
> By “full potential” I mean visual debugging and inspection. I am using
> Pycharm, but this applies to any fairly advanced IDE, like Eclipse/PyDev.
> They usually rely on ‘native’ (/usr/bin/) or virtual (provided by
> virtualenv, conda et c.) Python prefixes for code inspection, and chroot
> is neither of them.

@Jock Tanner,

Do you think binding a container to a repo on host would solve your pyCharm needs?
You would have pyCharm on host...

You can also have a python REPL in your Container, and then connect to that Python instance over a port the docker container exposes...

I've seen that done before.

Yehowshua
Comment 190 Yehowshua 2020-03-10 04:28:35 GMT
> By “full potential” I mean visual debugging and inspection. I am using
> Pycharm, but this applies to any fairly advanced IDE, like Eclipse/PyDev.
> They usually rely on ‘native’ (/usr/bin/) or virtual (provided by
> virtualenv, conda et c.) Python prefixes for code inspection, and chroot
> is neither of them.

@Jock Tanner,

Do you think binding a container to a repo on host would solve your pyCharm needs?
You would have pyCharm on host...

You can also have a python REPL in your Container, and then connect to that Python instance over a port the docker container exposes...

I've seen that done before.

Yehowshua
Comment 191 Yehowshua 2020-03-10 04:30:41 GMT
(In reply to Yehowshua from comment #190)
> You can also have a python REPL in your Container, and then connect to that
> Python instance over a port the docker container exposes...

** connect that to a pyCharm instance on host

That is, connect the python instance in the docker container to the pyCharm instance on host.
Comment 192 Luke Kenneth Casson Leighton 2020-03-10 05:03:51 GMT
(In reply to Jock Tanner from comment #187)
> My working environment is finally set up.
> 
> I have fixed a typo regarding virtual environment activation in 6.5 of HDL
> workflow. I also mentioned the dependencies (either python3-venv or
> python-virtualenv) that were kinda taken for granted. 

ahh hang on. that will need discussion under a separate bugreport (jock can you raise one, 4am here at the moment).  pip is seriously compromised (zero security and zero replicability).

oh, it is not for coriolis2, it's for sfpy.

> I think now the
> documentation is Buster-proof.

great.

> Maybe now it's a good time to script the whole process? Or is it not an
> issue?

hmm as long as each of us, in the small team, has the same build, i'm happy for now. 

when we start to do replicable builds we will need to get stricter about this.

at that time we will use specific frozen git tags, specific build dependencies and absolutely nothing else.  no editors, no dev tools, nothing.

docker would be "perfect" for that, whereas right now it just gets in the way.


> I've also been thinking of up- and downsides of using chroot vs Docker vs
> native OS packaging,

see jean paul's earlier comment in this thread about replicability.

OS packaging does not help to ensure absolute identical setups.

> and soon realized that the biggest downside of
> our current setup is that it can be problematic for me to unleash the full
> potential of my IDE on this project. 

three options, very simple.

1. git push and git pull between the two.  use the server to do so (please don't create branches, use master).

2. mount-bind the chroot's home directory to outside where you use the preferred editors etc.

3. don't even bother doing that, just run the preferred editors in /home/chroot/coriolis/home/username/etcetcetc.

there are probably others.

installing full editor environments in the chroot is not a desirable goal. the coriolis2 guis are tolerated because it is time-consuming to remove them from the integrated dev environment.  i would be much happier if they weren't a necessary install prerequisite and could be installed *outside* the chroot.

anyway, you have things running, this is good.  can you take a look at the benchs nmigen doAlu16.py and experiments5 in soclayout?

you can see i am trying to get blocks done in a hierarchical fashion.  set the inputs on NORTH and outputs on SOUTH then autoroute them, then *use* those blocks to make a bigger block, then use that bigger block in an even bigger block.

each time you say which side the inputs and outputs are to go on.

so a really useful meta-function will be to do the same thing as ioring.py

look in the alliance-check-toolkit benchs (find . -name ioring.py) and you will see what it does.
Comment 193 Luke Kenneth Casson Leighton 2020-03-10 05:15:41 GMT
(In reply to Yehowshua from comment #188)
> Docker works really well usually and is repeatably deployable with CI etc
> etc.

docker will seriously get in the way at this phase.

when we have to do repeatable builds on fixed git tags *then* it is "perfect".

right now we need to be able to git pull on five to eight different repos, and using and relying on a docker container to do that job (badly) is just going to piss me off.

a debian stable (buster) chroot however is a minimal compromise that does not force everyone to back their main system down to debian buster, yet still allows clean git pulls (and pushes) allowing everyone to keep in sync.

even distro packaging would be a nuisance because we need to do regular git pulls and rebuilds of alliance etc.

so please, jock: don't install "preferred  editor environment" in the chroot.  keep it absolutely minimalist.
Comment 194 Jock Tanner 2020-03-10 07:03:17 GMT
(In reply to Luke Kenneth Casson Leighton from comment #192)
> ahh hang on. that will need discussion under a separate bugreport (jock can
> you raise one, 4am here at the moment).

Sorry I didn't mean to wake you up (or keep you awake, whatever).

>  pip is seriously compromised (zero
> security and zero replicability).

Yikes! As a maintainer of a PyPI package, I have great difficulties even imagining pip has such a serious flaw. Honestly.

> oh, it is not for coriolis2, it's for sfpy.

Yes, it's for sfpy. I did not bring in a new dependency, I only mentioned it explicitly. Does it really deserve a bug report?
Comment 195 Jock Tanner 2020-03-10 09:59:11 GMT
(In reply to Yehowshua from comment #190)
> @Jock Tanner,
> 
> Do you think binding a container to a repo on host would solve your pyCharm
> needs?
> You would have pyCharm on host...
> 
> You can also have a python REPL in your Container, and the connect to that
> Python instance over a port the docker container exposes...
> 

It is possible, I've also seen this before. But remote debugging

- slows things down,

- requires certain steps to inject the Pycharm debugging host into the container (I may be mistaken, but last time `pip install` did not suffice),

- is not as polished as regular debugging; at least a couple of years ago there were some nasty Pycharm bugs.

So I'd really prefer to run my target locally.
Comment 196 Jock Tanner 2020-03-10 10:13:47 GMT
(In reply to Luke Kenneth Casson Leighton from comment #192)
> three options, very simple.
> 

These are the options of accessing program text. What I'd like to have is an option of accessing the interpreter. I can point Pycharm to run/pause/inspect either a system-level interpreter, or a virtualenv'd one. The problem is that Pycharm, being outside the chroot, cannot work with an interpreter installed inside it. There is an option of remote debugging, as @Yehowshua mentions, but it has its cons.
 
> anyway, you have things running, this is good.  can you take a look at the
> benchs nmigen doAlu16.py and experiments5 in soclayout?
> 
> you can see i am trying to get blocks done in a hierarchical fashion.  set
> the inputs on NORTH and outputs on SOUTH then autoroute them, then *use*
> those blocks to make a bigger block, then use that bigger block in an even
> bigger block.
> 
> each time you say which side the inputs and outputs are to go on.
> 
> so a really useful meta-function will be to do the same thing as ioring.py
> 
> look in the alliance-check-toolkit benchs (find . -name ioring.py) and you
> will see what it does.

I'll take a look right now.
Comment 197 Luke Kenneth Casson Leighton 2020-03-10 11:51:23 GMT
(In reply to Jock Tanner from comment #196)
> (In reply to Luke Kenneth Casson Leighton from comment #192)
> > three options, very simple.
> > 
> 
> These are the options of accessing program text. What I'd like to have is an
> option of accessing the interpreter.

ahh ok.  *thinks*.

you could try to "virtualenv" the version of python inside the chroot to *outside* the chroot by executing it from outside, with suitable LD_LIBRARY_PATHs and PYTHONPATHs etc.  those would simply be extended with "/home/chroot/coriolis/usr/lib/python3.7/dist-packages" etc. although that's more likely to work if you're using debian-on-debian.

all quite a lot of hassle, and not a good idea even if using docker, because we need to *not* install arbitrary software inside the chroot, for risk that it "upgrades" or installs additional software dependencies in a way that adversely affects builds.

one other possible route would be pyro.  (python remote-objects).  it works.  it's pretty minimalist.  you *might* get away with it: just try not to abuse it, and bear in mind that when it comes to final production builds, we *really* cannot have anything other than the absolute minimum in the chroot.

yesyes i know i said "no extra software" then suggested installing pyro.  if i'm installing vim and editing inside the chroot, you can sneak pyro in :)
Comment 198 Luke Kenneth Casson Leighton 2020-03-10 12:04:40 GMT
(In reply to Jock Tanner from comment #194)
> (In reply to Luke Kenneth Casson Leighton from comment #192)
> > ahh hang on. that will need discussion under a separate bugreport (jock can
> > you raise one, 4am here at the moment).
> 
> Sorry I didn't mean to wake you up (or keep you awake, whatever).

my choice :)

> >  pip is seriously compromised (zero
> > security and zero replicability).
> 
> Yikes! As a maintainer of a PyPI package, I have great difficulties even
> imagining pip has such a serious flaw. Honestly.

it's set up in a similar way to nodejs.  not quite as bad as this but nearly.

https://www.eweek.com/security/node.js-event-stream-hack-exposes-supply-chain-security-risks

absolutely and critically dependent on the website - pypi.org - for its security.  do you GPG sign packages before uploading them? what happens if pypi - the website - is hacked?

a solution is to *pre-install* debian python dependency packages *before* running "python setup.py develop".  this *stops* pip going out and arbitrarily and blithely downloading random crap that's completely unsigned and you have absolutely no idea if it's been vetted.

but (A) those packages have to exist and (B) you have to do a careful analysis of the requirements.txt and the setup.py *before* running setup.py

otherwise off it goes, wandering blithely through an *UNVETTED* web site installing *ARBITRARY* software.

https://www.zdnet.com/article/twelve-malicious-python-libraries-found-and-removed-from-pypi/
https://www.bleepingcomputer.com/news/security/ten-malicious-libraries-found-on-pypi-python-package-index/
https://www.reddit.com/r/linux/comments/709a4t/pypi_compromised_by_fake_software_packages/
https://developers.slashdot.org/story/19/12/04/1430223/two-malicious-python-libraries-caught-stealing-ssh-and-gpg-keys

that last one was only 3 months ago.

hence the bugreport.

> > oh, it is not for coriolis2, it's for sfpy.
> 
> Yes, it's for sfpy. I did not bring in a new dependency, I only mentioned it
> explicitly. Does it really deserve a bug report?

mm... yeah.  we need to make sure that what we're doing is properly replicable: that "arbitrary installs and upgrades" don't screw things up.

it's bad enough that nmigen's dependencies aren't packaged for debian, so we *have* to rely on pip.

that means that when it comes to replicable builds, we will have to *manually* install nmigen's dependencies - giving specific versions of them - in order to *stop* pip going out and arbitrarily grabbing "The Latest And Greatest Whatever".
Comment 199 Luke Kenneth Casson Leighton 2020-03-10 12:31:03 GMT
https://github.com/tomerfiliba/rpyc

i meant rpyc not pyro4, specifically ZeroDeploy:
https://rpyc.readthedocs.io/en/latest/docs/zerodeploy.html#zerodeploy

however for "easiest" deployment it may need sshd installed in the chroot and set to a non-standard ssh port in /home/chroot/coriolis/etc/ssh/sshd_config

otherwise you will have to futz about trying to work out the manual list of things right there in paragraph one "Zero-Deploy RPyC", over a chroot.

all of which is quite a lot of hassle, particularly as even once rpyc works, you still have to work out how to get it to work under pycharm.
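
a minimal sketch of the zerodeploy route, assuming sshd is already listening inside the chroot on a non-standard port (the host, port and username below are placeholders):

    from plumbum import SshMachine
    from rpyc.utils.zerodeploy import DeployedServer

    # ssh into the chroot's sshd (placeholder host/port/user)
    mach = SshMachine("localhost", port=2222, user="username")

    # copies rpyc over, starts a throwaway server, cleans up on close()
    server = DeployedServer(mach)
    conn = server.classic_connect()

    # the chroot's interpreter and its modules are now reachable from outside
    print(conn.modules.sys.version)

    server.close()
    mach.close()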

you *might* simply be able to do this (not involving rpyc) after getting sshd running on a non-standard port number in the chroot:

https://www.jetbrains.com/help/pycharm/configuring-remote-interpreters-via-ssh.html



now, if there was a *really clear benefit* - bear in mind that coriolis2 is *not* a standard "straight python" module, it's a computationally-heavy hybrid c++ python module - i'd say it was worth pursuing.

really, i can't think of any benefit that pycharm brings which can't be had *outside* the chroot, by installing the source code *outside* the chroot (soclayout, nmigen, everything).

if you *really* need pycharm to do "import Hurricane" in order to walk it, can i suggest attempting installation of coriolis2 and alliance under archlinux, although please don't spend too much time doing so.
Comment 200 Jock Tanner 2020-03-11 12:35:00 GMT
(In reply to Luke Kenneth Casson Leighton from comment #197)
>
> yesyes i know i said "no extra software" then suggested installing pyro.  if
> i'm installing vim and editing inside the chroot, you can sneak pyro in :)

Well, if “no extra software” is the main concern, then I guess my initial idea of installing Pycharm locally in chroot will do just fine. It's completely self-contained and requires no modification to the system, with the exception of:

- setup folder (can be installed in home folder, no root required),
- global settings folder (also in home folder),
- project settings folder (can be put in '.git/info/exclude' or '.gitignore', which I usually do).

Delete those 3 objects, and you'll never remember that you used it.

Actually, I've tried it already, and all went well with one exception: I can't get binary extensions indexed. Yes, I'd like to walk through Hurricane attributes in editor. =) I think it's worth a little investigation.

(In reply to Luke Kenneth Casson Leighton from comment #198)
> absolutely and critically dependent on the website - pypi.org - for its
> security.  do you GPG sign packages before uploading them? what happens if
> pypi - the website - is hacked?

Yes, it depends on pypi.org to store and transfer packages. And on the SSL chain of trust to know if pypi.org is authentic. If pypi.org or the chain of trust is hacked, we're all hacked.

On the other hand, the processing of signatures significantly increases the attack surface by including the corresponding tools in the process, on both client and server sides. And if a process is more complex, people tend to make more mistakes.

The model of trust is also important here. Should I trust the owner of a certain site? The issuer of a certain PGP signature? The ISP?

Thankfully, all the components involved in the process (twine, cheeseshop, pip and setuptools) are quite versatile, and the process can be easily tailored to your model of trust. You can use the VCS of your choice on a trusted host to store packages, or set up your own PyPI.

You can also place your signature somewhere in your PyPI package, and then modify setuptools to verify the signature along with the package hash before running setup.py. I think it should be fairly easy to implement.

> a solution is to *pre-install* debian python dependency packages *before*
> running "python setup.py develop".  this *stops* pip going out and
> arbitrarily and blithely downloading random crap that's completely unsigned
> and you have absolutely no idea if it's been vetted.

Package pinning in pip has been out there since I don't remember when. I think there were some difficulties in pinning pip itself, but I suppose they are gone now. pip should be ready for reproducible builds.

The problem of malicious packages in PyPI can also be mitigated by package pinning, although this is more a social problem than a technical one.
Comment 201 Luke Kenneth Casson Leighton 2020-03-11 12:45:57 GMT
(In reply to Jock Tanner from comment #200)

> Well, if “no extra software” is the main concern, then I guess my initial
> idea of installing Pycharm locally in chroot will do just fine. It's
> completely self-contained and requires no modification to the system, with
> the exception of:
> 
> - setup folder (can be installed in home folder, no root required),
> - global settings folder (also in home folder),
> - project settings folder (can be put in '.git/info/exclude' or
> '.gitignore', which I usually do).
> 
> Delete those 3 objects, and you'll never remember that you used it.
> 
> Actually, I've tried it already, and all went well 

excellent

> with one exception: I can't get binary extensions indexed. 

this does not surprise me at all.  and you won't find anything anyway:
there are zero docstrings 

> Yes, I'd like to walk through Hurricane
> attributes in editor. =) I think it's worth a little investigation.

you will likely need the coriolisEnv.py settings. i believe pycharm
actually *executes* the module (imports it, which can have side-effects).

clearly, to execute the Hurricane.so that involves being able to load
the dynamic libraries it's linked to.

that in turn involves having coriolisEnv.py environment variables set up

it also implies having access to the entire coriolis binaries and libraries.

anyway - focus.

let me know how you get on with the analogy of creating something similar to ioring.py, to create "Pads" that you can see in the experiments5 and alu_hier nmigen benchs.  you might even be able to use the code *from* coriolis2 that imports ioring.py as a base.
Comment 202 Jock Tanner 2020-03-11 12:59:49 GMT
(In reply to Luke Kenneth Casson Leighton from comment #201)
> this does not surprise me at all.  and you won't find anything anyway:
> there are zero docstrings 
> 

Well, actually 'help(Hurricane)' gives a lot of info that Pycharm should use for syntax checking, code completion, et c. I wonder why it has failed this time.
Comment 203 Jock Tanner 2020-03-11 14:51:22 GMT
(In reply to Luke Kenneth Casson Leighton from comment #201)
> let me know how you get on with the analogy of creating something similar to
> ioring.py, to create "Pads" that you can see in the experiments5 and
> alu_hier nmigen benchs.  you might even be able to use the code *from*
> coriolis2 that imports ioring.py as a base.

Can the solution look like this:

> ('experiment5/alu_hier.py')
>
> def create_pads(top_element, pins, file_name):
>     for pin in pins:
>         # some introspective magic
>         # as well as a bunch of guessing
>         pass
> 
>     with open(file_name, "w") as ioring:
>         # a bit of templating magic with jinja2
>
>
>         ...
>
>
> if __name__ == "__main__":
>     alu = ALU(width=16)
>     pins = [alu.op, alu.a, alu.b, alu.o]
>     create_pads(alu, pins, "coriolis2/ioring.py")
>     create_ilang(alu, pins, "alu_hier")
>
Comment 204 Luke Kenneth Casson Leighton 2020-03-11 15:35:53 GMT
(In reply to Jock Tanner from comment #203)

> Can the solution look like this:
> 
> > ('experiment5/alu_hier.py')
> >
> > def create_pads(top_element, pins, file_name):
> >     for pin in pins:
> >         # some introspective magic
> >         # as well as a bunch of guessing
> >         pass
> > 
> >     with open(file_name, "w") as ioring:
> >         # a bit of templating magic with jinja2

ermmm ermermerm i like the idea of keeping things in an external file

*thinks*... actually, CSV format would be better, because it's 3 lines of code:

    import csv # standard python library
    with open(file_path, 'r') as csvfile:
        return list(csv.DictReader(csvfile))

remember, we need to minimise the dependencies (see HDL_workflow, and see
discussion we just had!)

in CSV format, it could be:

SIDE,pinname,offset,comment
NORTH,clk,,

the defaults would be, if there is no offset, to make one.  the order of
the pins in the CSV file would define the order in which they appear on
that side.
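
a rough sketch of the reader side (the function name is invented; it just wraps those three lines and applies the defaults above):

    import csv  # standard python library

    def read_pad_spec(file_path):
        """read SIDE,pinname,offset,comment rows, group the pin names by
           side (preserving file order) and auto-number missing offsets."""
        sides = {}  # insertion-ordered on python 3.7+
        with open(file_path, 'r') as csvfile:
            for row in csv.DictReader(csvfile):
                pins = sides.setdefault(row['SIDE'], [])
                # no offset given: make one (here, just the next free slot)
                offset = int(row['offset']) if row['offset'] else len(pins)
                pins.append((row['pinname'], offset))
        return sides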

i think that would work really well.  later we can consider actually embedding
the CSV file contents *into* the docstring (or a function) of the actual nmigen source code.

or something.
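
for instance, pulling the table back out of a docstring would be about the same amount of code (sketch only: the class name and pin names are invented):

    import csv, io

    class ALU16:
        """SIDE,pinname,offset,comment
        NORTH,clk,,
        WEST,a,,16-bit input
        WEST,b,,16-bit input
        EAST,o,,16-bit output
        """

    def pads_from_docstring(cls):
        # pull the CSV straight back out of the class docstring
        lines = [l.strip() for l in cls.__doc__.strip().splitlines()]
        return list(csv.DictReader(io.StringIO("\n".join(lines))))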

if it turns out to be more sophisticated than that, we can revisit this (or use an alternative format).

basic guiding principle: keep it *real* simple.  i won't go into full detail of the nightmare story of someone doing a less-functional "Model-View-Controller" replacement of 800 lines of python i once wrote.  he wrote 16 *THOUSAND* lines of code - in 5 weeks.  for a simple "real-time CPU usage" web app.
Comment 205 Jock Tanner 2020-03-12 11:23:20 GMT
(In reply to Luke Kenneth Casson Leighton from comment #204)
> ermmm ermermerm i like the idea of keeping things in an external file
> 
> *thinks*... actually, CSV format would be better, because it's 3 lines of
> code:
> 
>     import csv # standard python library
>     with open(file_path, 'r') as csvfile:
>         return list(csv.DictReader(csvfile))
> 
> remember, we need to minimise the dependencies (see HDL_workflow, and see
> discussion we just had!)
> 
> in CSV format, it could be:
> 
> SIDE,pinname,offset,comment
> NORTH,clk,,
> 
> the defaults would be, if there is no offset, to make one.  the order of
> the pins in the CSV file would define the order in which they appear on
> that side.
> 
> i think that would work really well.  later we can consider actually
> embedding
> the CSV file contents *into* the docstring (or a function) of the actual
> nmigen source code.

Looks like I implied too much.

First of all, I thought of keeping all the ioring logic intact as a requirement, or at least as a generally good approach. That's why I didn't consider adding, say, CSV support into 'coriolis2.ioring' or 'Configuration.loadConfiguration()'.

Second, I don't think that introducing an additional file with a special format would do us any good. I suppose that 'coriolis2.ioring' is ok as it is for passing pad configuration between stages. If I am just not aware of the issues with it, please point me in a right direction.

What I did think is necessary is to eliminate the need for the user to configure the pads at all, or at least to configure them separately from top-level logic. I thought we can just set up formal rules (like putting pins with names ending on '_i' to the left, '_o' − to the right), and implement the reusable 'create_pads()' to inspect the list of pins according to these rules and generate 'coriolis2.ioring'.
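
As a rough illustration (the function name and the exact rules are hypothetical, and the actual generation of 'coriolis2.ioring' is left out), the rule part could be as small as:

    def classify_pins(pin_names):
        """Split pins by naming convention: '*_i' inputs to the west (left),
        '*_o' outputs to the east (right), everything else to the north."""
        sides = {'west': [], 'east': [], 'north': []}
        for name in pin_names:
            if name.endswith('_i'):
                sides['west'].append(name)
            elif name.endswith('_o'):
                sides['east'].append(name)
            else:
                sides['north'].append(name)
        return sides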

Of course, I'm not yet proficient enough in the current workflow to see all the right implications. So I have to ask of you to specify my task.
Comment 206 Luke Kenneth Casson Leighton 2020-03-12 11:48:26 GMT
(In reply to Jock Tanner from comment #205)
> (In reply to Luke Kenneth Casson Leighton from comment #204)
> > ermmm ermermerm i like the idea of keeping things in an external file
> > 
> > *thinks*... actually, CSV format would be better, because it's 3 lines of
> > code:
> > 
> >     import csv # standard python library
> >     with open(file_path, 'r') as csvfile:
> >         return list(csv.DictReader(csvfile))
> > 
> > remember, we need to minimise the dependencies (see HDL_workflow, and see
> > discussion we just had!)
> > 
> > in CSV format, it could be:
> > 
> > SIDE,pinname,offset,comment
> > NORTH,clk,,
> > 
> > the defaults would be, if there is no offset, to make one.  the order of
> > the pins in the CSV file would define the order in which they appear on
> > that side.
> > 
> > i think that would work really well.  later we can consider actually
> > embedding
> > the CSV file contents *into* the docstring (or a function) of the actual
> > nmigen source code.
> 
> Looks like I implied too much.

(nono, we're discussing ideas, here.  it's just that unlike other projects which can freely add arbitrary dependencies, we have to be careful and consider how a particular task may be achieved.  the suggestion of doing something that jinja would *be* suited to was great).

> First of all, I thought of keeping all the ioring logic intact as a
> requirement, 

not a hard one.

> or at least as a generally good approach. 

yes.

> That's why I didn't
> consider adding, say, CSV support into 'coriolis2.ioring' or
> 'Configuration.loadConfiguration()'.

no that would be stepping on the toes of the coriolis2 project.
 
> Second, I don't think that introducing an additional file with a special
> format would do us any good. I suppose that 'coriolis2.ioring' is ok as it
> is for passing pad configuration between stages. If I am just not aware of
> the issues with it, please point me in a right direction.

the issue with ioring.py "as-is", is that the procedure for laying out the pads is entirely automatic, spaced-out evenly, because it's designed *exclusively* for doing Quad Flat Packages (QFPs).

what it can't do is something like this:

https://cdn.sparkfun.com/r/600-600/assets/7/a/6/9/c/51c0d009ce395feb33000000.jpg

that's an *uneven* distribution of the pads.

now, if ioring.py was capable of allowing us to specify that "signals a[0..15] are to be on the LEFT part of the NORTH side", then yes i would say it was perfect.

this is why i suggested being able to specify the precise location.


> What I did think is necessary is to eliminate the need for the user to
> configure the pads at all, or at least to configure them separately from
> top-level logic. I thought we can just set up formal rules (like putting
> pins with names ending on '_i' to the left, '_o' − to the right), and
> implement the reusable 'create_pads()' to inspect the list of pins according
> to these rules and generate 'coriolis2.ioring'.

hmmm.... that may actually be useful in some cases.  i think... i think we need a class of some kind, with the ability to derive from it and create alternative layouts.

> Of course, I'm not yet proficient enough in the current workflow to see all
> the right implications. So I have to ask of you to specify my task.

ok, let's discuss this on another bugreport.
Comment 207 Jean-Paul Chaput 2020-03-12 15:55:54 GMT
Created attachment 35 [details]
Coriolis2 ioring.py with explicit pad positioning
Comment 208 Jean-Paul Chaput 2020-03-12 16:15:11 GMT
(In reply to Jean-Paul.Chaput from comment #207)
> Created attachment 35 [details]
> Coriolis2 ioring.py with explicit pad positioning

> > Second, I don't think that introducing an additional file with a special
> > format would do us any good. I suppose that 'coriolis2.ioring' is ok as it
> > is for passing pad configuration between stages. If I am just not aware of
> > the issues with it, please point me in a right direction.
> 
> the issue with ioring.py "as-is", is that the procedure for laying out the
> pads is entirely automatic, spaced-out evenly, because it's designed
> *exclusively* for doing Quad Flat Packages (QFPs).
> 
> what it can't do is something like this:
> 
> https://cdn.sparkfun.com/r/600-600/assets/7/a/6/9/c/51c0d009ce395feb33000000.
> jpg
> 
> that's an *uneven* distribution of the pads.

  Wrong! ;-) You can specify the exact position of each pad with the ioring.py
  file. I did put an example in attachment #35. It is not part of the toolkit
  as the related example is the adder with AMS 350nm (under NDA).

  I did not completely follow the thread about the CSV file but, in my
  experience, I don't see the need of introducing a new file format (even if
  it's simple CSV); just write it directly in Python instead.
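
  For the record, an ioring.py is just a Python dict; roughly this shape (the key names here are written from memory and the pad names are placeholders, so check attachment 35 and the alliance-check-toolkit benchs for the real thing, including the explicit-position variant):

    chip = { 'pads.ioPadGauge' : 'pxlib'
           , 'pads.south'      : [ 'p_a0', 'p_a1', 'p_a2', 'p_a3' ]
           , 'pads.east'       : [ 'p_b0', 'p_b1', 'p_b2', 'p_b3' ]
           , 'pads.west'       : [ 'p_o0', 'p_o1', 'p_o2', 'p_o3' ]
           , 'pads.north'      : [ 'p_vddick0', 'p_vssick0', 'p_ck' ]
           , 'core.size'       : ( 1500, 1500 )
           , 'chip.size'       : ( 3000, 3000 )
           , 'chip.clockTree'  : True
           }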

> > What I did think is necessary is to eliminate the need for the user to
> > configure the pads at all, or at least to configure them separately from
> > top-level logic. I thought we can just set up formal rules (like putting
> > pins with names ending on '_i' to the left, '_o' − to the right), and
> > implement the reusable 'create_pads()' to inspect the list of pins according
> > to these rules and generate 'coriolis2.ioring'.
> 
> hmmm.... that may actually be useful in some cases.  i think... i think we
> need a class of some kind, with the ability to derive from it and create
> alternative layouts.

  In my experience again, there are too many different cases for an automated
  approach to work for enough of them. And I'm very reluctant to put semantics
  in the pad names; it may clash with the HDL in too many ways.

   Lastly, if I'm not mistaken, you want to put information about the pads at
   nMigen level. I think it should be avoided. nMigen is for the design
   (behavioral or high-level description), and information pertaining to
   the layout should be kept at Coriolis level.
Comment 209 Luke Kenneth Casson Leighton 2020-03-12 17:14:51 GMT
i'm going to close this one as resolved as far as the tutorial part is concerned (and we'll work out who gets what), however if it's useful to do so, carry on discussing :)
Comment 210 Luke Kenneth Casson Leighton 2020-03-12 17:25:47 GMT
for this one, i'm going to suggest:

* EUR 250 to jock for the (crucial) questions and for helping get to buster 
* EUR 200 to tobias for the early checking of the tutorial
* EUR 700 to be donated to LIP6.fr for the huge amount of bugfixing by JP
* EUR 150 to Staf for his insights
* EUR 1200 to me for the persistent head-banging against brick walls

Cole, as long as you are in Europe, you helped find some early bugs in the tutorial page, and made some useful edits, however if you are outside of Europe, most of what NLNet would transfer to you would go in banking fees.  we can if you like work something out.
Comment 211 Tobias Platen 2020-03-15 19:57:17 GMT
200 for me looks good
Comment 212 Jock Tanner 2020-03-18 14:38:01 GMT
I don't know if I'm expected to react somehow, but for me it's fine. I just wonder if anyone could enlighten me about the due procedures.

(In reply to Luke Kenneth Casson Leighton from comment #210)
> for this one, i'm going to suggest:
> 
> * EUR 250 to jock for the (crucial) questions and for helping get to buster 
> * EUR 200 to tobias for the early checking of the tutorial
> * EUR 700 to be donated to LIP6.fr for the huge amount of bugfixing by JP
> * EUR 150 to Staf for his insights
> * EUR 1200 to me for the persistent head-banging against brick walls
> 
> Cole, as long as you are in Europe, you helped find some early bugs in the
> tutorial page, and made some useful edits, however if you are outside of
> Europe, most of what NLNet would transfer to you would go in banking fees. 
> we can if you like work something out.
Comment 213 Luke Kenneth Casson Leighton 2020-03-18 15:18:39 GMT
(In reply to Jock Tanner from comment #212)
> I don't know if I'm expected to react somehow, but for me it's fine. I only
> think if anyone could enlighten me about the due procedures.

ok see https://libre-riscv.org/about_us/ create yourself a user-page
on the wiki (template one of the existing ones, use mine as i have
more "headings"), then put EUR amount in a "pending" section, with
the heading "NLNet 2019 Coriolis2 Layout proposal 2019-10-029" and
put the date on that as well.

the RFP will go in to NLNet once i hear back from them.  are you in
Europe?  if so it's ok to put in small(ish) amounts.

(In reply to Tobias Platen from comment #211)
> 200 for me looks good

great, can you do the same as jock?
Comment 214 Jock Tanner 2020-03-18 22:47:19 GMT
(In reply to Luke Kenneth Casson Leighton from comment #213)
> ok see https://libre-riscv.org/about_us/ create yourself a user-page
> on the wiki (template one of the existing ones, use mine as i have
> more "headings"), then put EUR amount in a "pending" section, with
> the heading "NLNet 2019 Coriolis2 Layout proposal 2019-10-029" and
> put the date on that as well.

Done.
 
> the RFP will go in to NLNet once i hear back from them.  are you in
> Europe?  if so it's ok to put in small(ish) amounts.

I live in Russia. I have an account in a local bank. I can receive international bank transfers. I understand that a nonprofit organization cannot pay bank charges for me. But exactly how terrible are those charges?
Comment 215 Luke Kenneth Casson Leighton 2020-03-18 23:41:57 GMT
(In reply to Jock Tanner from comment #214)
> (In reply to Luke Kenneth Casson Leighton from comment #213)
> > ok see https://libre-riscv.org/about_us/ create yourself a user-page
> > on the wiki (template one of the existing ones, use mine as i have
> > more "headings"), then put EUR amount in a "pending" section, with
> > the heading "NLNet 2019 Coriolis2 Layout proposal 2019-10-029" and
> > put the date on that as well.
> 
> Done.
>  
> > the RFP will go in to NLNet once i hear back from them.  are you in
> > Europe?  if so it's ok to put in small(ish) amounts.
> 
> I live in Russia. I have an account in a local bank. I can receive
> international bank transfers. I understand that a nonprofit organization
> cannot pay bank charges for me. But exactly how terrible are those charges?

we should probably wait until you accumulate arouuund... EUR 1500 minimum, then.  otherwise NLnet are left asking themselves whether they would like to support us, or whether they would like to support the banks.

:)
Comment 216 Cole Poirier 2020-04-21 22:00:57 BST
Luke, do you want gcc-9-powerpc64-linux-gnu installed inside of the chroot? The reason I ask is because it involves changing /etc/apt/sources.list from buster to testing or unstable in order to install via apt, something that I presume is trivial in the chroot, but likely not as feasible on the host system. If I am wrong in any of my assumptions please clarify, I'm still very, very new to linux.
Comment 217 Luke Kenneth Casson Leighton 2020-04-21 22:09:22 BST
(In reply to Cole Poirier from comment #216)
> Luke, do you want gcc-9-powerpc64-linux-gnu installed inside of the chroot?

for coriolis2? we have no reason to compile executables (yet).

> The reason I ask is because it involves changing /etc/apt/sources.list from
> buster to testing or unstable in order to install via apt, 

yeah that's not a good idea.  the coriolis2 chroot absolutely has to be
a reproducible build.  debian/testing is the absolute opposite of that.

> something that I presume is trivial in the chroot, 

it's trivial on both, however the results would be an absolute disaster.
i do not know if you have ever tried to compile software on a moving
target: it is absolute hell.

if we absolutely have to install gcc for powerpc in the chroot, it would
be better to do so from a full source code build.
Comment 218 Cole Poirier 2020-04-21 22:15:06 BST
(In reply to Luke Kenneth Casson Leighton from comment #217)
> (In reply to Cole Poirier from comment #216)
> > Luke, do you want gcc-9-powerpc64-linux-gnu installed inside of the chroot?
> 
> for coriolis2? we have no reason to compile executables (yet).

I've left out your comments regarding compiling for a moving target for brevity, but I definitely take your point. The reason I ask is that I'm following the HDL workflow, and there is a section instructing the user to install gcc-9-powerpc.

```
Section 6.6 qemu, cross-compilers, gdb

As we are doing POWER ISA, POWER ISA compilers, toolchains and emulators are required.

Install powerpc64 gcc:

apt-get install gcc-9-powerpc64-linux-gnu

Install qemu:

apt-get install qemu-system-ppc

Install gdb from source. Obtain the latest tarball, unpack it, then:

cd gdb-9.1 (or other location)
mkdir build
cd build
 ../configure --srcdir=.. --host=x86_64-linux --target=powerpc64-linux-gnu
make -j16
make install
```
Comment 219 Luke Kenneth Casson Leighton 2020-04-21 22:28:58 BST
(In reply to Cole Poirier from comment #218)
> (In reply to Luke Kenneth Casson Leighton from comment #217)
> > (In reply to Cole Poirier from comment #216)
> > > Luke, do you want gcc-9-powerpc64-linux-gnu installed inside of the chroot?
> > 
> > for coriolis2? we have no reason to compile executables (yet).
> 
> I've left out your comments regarding compiling for a moving target for
> brevity, but I definitely take your point. The reason I ask is that I'm
> following the HDL workflow, 

ok.  then this bugreport, which is for coriolis2 and not for HDL_workflow,
is not the right place.

plus, this bugreport (related to coriolis2) has been closed.

> and there is a section instructing the user to
> install gcc-9-powerpc.
> 
> ```
> Section 6.6 qemu, cross-compilers, gdb
> 
> As we are doing POWER ISA, POWER ISA compilers, toolchains and emulators are
> required.
> 
> Install powerpc64 gcc:
> 
> apt-get install gcc-9-powerpc64-linux-gnu

that's only available in debian/testing.

gcc-8-powerpc64-linux-gnu is what is available in debian/10.