
Initial Ti180 port from Ti60, overloading add_sdram #441

Status: Open. Wants to merge 1 commit into base: master.
Conversation

samh-efx

@enjoy-digital Hi Florent, could you help try out this port through synthesis?

It works fine with vexriscv and vexriscv-smp (with the --with-wishbone-memory flag), so the Wishbone-to-AXI path using the AXI point-to-point interconnect is fine.

With cores like Rocket or NaxRiscv, it fails a bit-slicing assertion during the FHDL generation phase.
When I comment those assertions out to experiment, it then fails during synthesis due to slicing out of range.
So I'm guessing I got one of the address widths wrong for the Wishbone2AXI bridge,
or the way I did bus region decoding for the slaves of the AXICrossbar is wrong?
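For intuition, here is a minimal sketch (plain Python, purely illustrative; `decode_region` is a hypothetical helper, not Migen or LiteX code) of how an address-width/region-size mismatch produces exactly this kind of out-of-range slice when a crossbar computes which address bits to compare for region decoding:

```python
import math

def decode_region(address_width, origin, size):
    """Return the (lo, hi) bit range a crossbar would compare to decode
    a region of the given power-of-two size on an address bus of the
    given width. Hypothetical helper for illustration only."""
    lo = int(math.log2(size))  # low bits index bytes inside the region
    hi = address_width         # top of the address bus
    if lo > hi:
        # Equivalent to slicing addr[lo:hi] with lo beyond the signal
        # width -- the kind of condition FHDL generation asserts on.
        raise ValueError(f"region needs bit {lo}, but bus is only {hi} bits wide")
    return lo, hi

# A 4 GiB region on a 32-bit bus compares bits [32:32] -- an empty
# (always-true) compare, which is legal but degenerate:
print(decode_region(32, 0x0000_0000, 0x1_0000_0000))  # (32, 32)

# The same 4 GiB region on a 30-bit bus is simply out of range:
try:
    decode_region(30, 0x0000_0000, 0x1_0000_0000)
except ValueError as e:
    print(e)
```

This is one plausible way a wrong `address_width` on the bridge, or a region larger than the bus can address, surfaces first as an FHDL slicing assertion and later as out-of-range slicing in synthesis.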

@enjoy-digital
Member

Hi @samh-efx,

I indeed reproduced the issue. Instead of searching for the root cause directly, I made some improvements to LiteX to simplify creating additional interconnects: enjoy-digital/litex@13448b8

This now allows your custom interconnect to be described like this:

        xbar_slaves = self.add_sdram_io(platform)
        self.xbar_bus = SoCBusHandler(
            name             = "SoCXBARBusHandler",
            standard         = "axi",
            data_width       = 512,
            address_width    = 32,
            bursting         = True
        )
        for master in xbar_masters:  # masters collected elsewhere (e.g. from the CPU buses)
            self.xbar_bus.add_master(master=master)
        self.xbar_bus.add_slave("main_ram", slave=xbar_slaves["main_ram"], region=SoCRegion(origin=0x00000000, size=0x100000000)) # FIXME: covers lower 4GB only
        self.xbar_bus.finalize()

With this, ./efinix_titanium_ti180_m484_dev_kit.py --cpu-type=naxriscv --build is still building here and hasn't crashed yet :)
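As a side note on the `SoCRegion(origin=0x00000000, size=0x100000000)` line above and its FIXME: region decoding is essentially a range check on the AXI address. A minimal sketch (names are illustrative, not the LiteX internals) of what "covers the lower 4 GB only" means:

```python
class Region:
    """Illustrative stand-in for SoCRegion: a slave is selected when the
    address falls inside [origin, origin + size)."""
    def __init__(self, origin, size):
        self.origin = origin
        self.size   = size

    def contains(self, addr):
        return self.origin <= addr < self.origin + self.size

# The main_ram region from the snippet above: origin 0, size 4 GiB.
main_ram = Region(origin=0x0000_0000, size=0x1_0000_0000)

print(main_ram.contains(0xFFFF_FFFF))    # True: last byte of the lower 4 GiB
print(main_ram.contains(0x1_0000_0000))  # False: first address beyond it
```

With a 32-bit `address_width` on the bus handler, addresses beyond 4 GiB cannot be expressed anyway, which is presumably why the FIXME flags the region rather than the decode logic.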


enjoy-digital commented Nov 14, 2022

@samh-efx: The best approach for your design would probably be to set up a simulation environment with LiteX (similar to what we are doing with litex_sim, but with LiteDRAM generated as a standalone core, integrating the simulation model (--sim), and with an AXI user port). This would allow you to simulate the different CPUs with Verilator and verify that the generated logic is correct before testing on hardware. I have limited time to do this currently, but please contact me directly if you would be interested in speeding this up and getting more support on this.

Comment on lines +64 to +69
("user_led", 0, Pins("E1"), IOStandard("3.3_V_LVTTL_/_LVCMOS"), Misc("DRIVE_STRENGTH=3")), # led2 GPIOB_N_02
("user_led", 1, Pins("F1"), IOStandard("3.3_V_LVTTL_/_LVCMOS"), Misc("DRIVE_STRENGTH=3")), # led3 GPIOB_P_02
("user_led", 2, Pins("C2"), IOStandard("3.3_V_LVTTL_/_LVCMOS"), Misc("DRIVE_STRENGTH=3")), # led4 GPIOB_P_13
("user_led", 3, Pins("E3"), IOStandard("3.3_V_LVTTL_/_LVCMOS"), Misc("DRIVE_STRENGTH=3")), # led5 GPIOB_P_14
("user_led", 4, Pins("B1"), IOStandard("3.3_V_LVTTL_/_LVCMOS"), Misc("DRIVE_STRENGTH=3")), # led6 GPIOB_N_11
("user_led", 5, Pins("B2"), IOStandard("3.3_V_LVTTL_/_LVCMOS"), Misc("DRIVE_STRENGTH=3")), # led7 GPIOB_P_12
On Titanium, this needs to be spelled 3.3_V_LVCMOS and have a DRIVE_STRENGTH that is a multiple of two, right?


Also, for that matter, the bank these pins are on is 1.8V on the Ti180J484 EVK; is that the same or different on the M484 EVK? It looks like most of the other pins are the same.


if args.flash:
    from litex.build.openfpgaloader import OpenFPGALoader
    prog = OpenFPGALoader("titanium_ti180_m484")

This board isn't upstream yet, is it? If this works for you, any chance I could get early access to it?
