Thursday, March 30, 2023

Clean build again, time to move onward to Linux side development and testing of the board

BUILT NEW PROJECT AND WORKED THROUGH VARIOUS ERROR MESSAGES

I carried over the files produced by SystemBuilder, used them to open the project in Quartus, added in all my files, updated the top level file and generated the SoC in Platform Designer. I then began rounds of compilation, knocking down various minor errors that arose.

REVERTED TO ISSUE WITH THE TCL SCRIPTS NOT WORKING PROPERLY

It is typical of FPGA toolchains, at least the ones from Xilinx, Altera and Intel, that the internal files beneath the GUI can get into a confused state, such that residue of old logic or parameters remains in the files and causes obscure problems when trying to generate results. 

At least with Intel/Altera, these are reasonably well described and can be read with an editor like Notepad or Linux vi/ed/etc. That does allow me to look into the files and spot why the errors are arising, although the toolchain won't easily correct them. I have found that doing a clean of the files will cause the toolchain to rebuild them all from the current contents entered through the GUI, thus resolving the stale data problem.

After a clean and some careful review of my code, I did a compilation. The first pass produces fitter errors because the pins and parameters for the SDRAM connections are not set up; the TCL scripts are run to establish these settings. The second compilation attempt, after script execution, should complete successfully. It did not. The TCL script was still failing SILENTLY and not adding the SDRAM pin assignments.

I manually appended them to the settings file xxxx.qsf to complete the assignments. Next I added IO Buffers for the signals coming from and going to the HPS side (the three HPS_GPIO signals) and did a compile. 

There were some battles with hidden tristate connections for a signal I wanted to connect to the FPGA IO connector and route to the HPS side to be read by the Linux application. After I abandoned attempts to use the HPS GPIO pins, the build completed without error.

THE CHALLENGE OF OVERLOADED WORDS

Altera, Intel and Terasic toss around the term GPIO in a number of contexts with different meanings or subtleties, which is quite confusing. Having read around 6000 pages of documentation to date, I recognize the places where this term switches meaning but it was hard fought knowledge, not spelled out anywhere clearly.

FPGAs have a building block called a General Purpose Input Output block - GPIO - which is basically all the normal pins that you would connect to other chips or the outside world. It includes debouncing, control over drive strength and slew rate, voltage standard selection and other capabilities.

The Cyclone V chip on its HPS side also has GPIOs - a facility that routes 67 bidirectional signals from the HPS side over to the FPGA side, but only when the peripheral mux bits are set. Essentially every pin (pad on the chip) has multiple possible connections, up to four, selected by the mux bits. A given pad might connect to an I2C serial signal, to another device on the HPS, or to the GPIO facility. Note this is NOT the same GPIO as the paragraph above.

The documentation on the HPS GPIO function shows that the signals are assigned across three GPIO banks - 0, 1 and 2. Signals 0 to 28 are on bank 0, signals 29 to 57 are on bank 1, and the remaining signals are on bank 2. 

Terasic provides two 40 pin connectors on the DE10-Nano board which is based on the Cyclone V chip. These are called GPIO 0 and GPIO 1. Each of the two provides 36 input/output pins as well as power and ground. This is a third use of the term GPIO. The HPS GPIO signals have no routing to the board GPIO connectors. The only way to connect them is via IO buffer functions in the FPGA side, using a signal passed through the hps_io conduit from the HPS side which can be redriven out a GPIO pin (of your choice). 

Just to add insult to injury, Terasic provides a single pin on the 2 x 7 pin LTC connector on the HPS side which is called a GPIO signal. That is a fourth use of the term. This GPIO signal is connected to one of the HPS GPIO signals, but also acts as a drive to a relay to switch contacts on the LTC connector between the SPI and the I2C functions on the SoC chip. 

SUCCESS IN BUILD

After some time tweaking things, I had a successfully built file for the Field Programmable Gate Array (FPGA) side of the System on a Chip (SoC). It was time to install some C code onto the Linux image for the Hard Processor System (HPS) side of the SoC then fire up the board. 

Monday, March 27, 2023

Minor issue with Quartus build of my SoC design - damned toolchains again

CHECKING IO PIN ASSIGNMENTS AND FOUND MISMATCH

As part of my desk checking, I was cross checking the external connector pins I had assigned to the logical signals within my Field Programmable Gate Array (FPGA) and the Hard Processor System (HPS) sides of the System on a Chip installed on this board. The manual for the board lists the SoC chip pins associated with each connector pin, but I quickly discovered that these did not match the assignments in my project. 

There were very serious mismatches; for instance, the main clock input to the SoC was on the wrong pin. As a result, the board would have sat there mutely doing nothing. After double checking that the SoC chip was the Cyclone V 5CSEBA6U23I7 that was configured in the Quartus toolchain, I began searching the web for others experiencing mismatched assignments.

Voila - I quickly found that the assignment of these is either done manually or by a tool provided by Terasic - SystemBuilder. It updates the xxxx.qsf file in Quartus to make various assignments for IO pins. This step was not done on my project, for a very good reason. 

THESE ASSIGNMENTS SET BY TERASIC SYSTEMBUILDER TOOL - IN WINDOWS

The tool provided by Terasic is a Windows executable. It does not have a Linux counterpart. It also produces a Verilog top level file, rather than VHDL as I would have preferred, but I long ago rolled with the punches to use the top level as it was.

Thus I moved all my VHDL files and the top level Verilog file over to the Linux instance, created the new project and then set up my SoC and other logic statements. This did not carry over the contents produced by SystemBuilder in the xxx.qsf file, thus I didn't have the proper assignments. 

FOLDING IN SYSTEM BUILDER STATEMENTS INTO MY FILES ON LINUX

I used SystemBuilder to create a dummy project with its high level file and xxxx.qsf file. That file was transported via USB drive over to my Ubuntu image and I merged this with my project. The proof of the pudding will be proper pin assignments for all the important signals. 

REBUILD OF THE DESIGN

Alas now the toolchain can't see my SoC design from Platform Designer (QSYS), thus it can't do the compile process properly. It appears that I must go all the way back to step 1, using the output file from SystemBuilder on my Windows machine as the input to a fresh project. I can then import all my other VHDL modules and tediously set up the SoC in Platform Designer. 

This is the kind of issue that arises from a fragile toolchain that is a patchwork of layers and add-ons. The way forward is very clear and success is assured, so I will be starting over now. 

Sunday, March 26, 2023

Success - full build of the logic for the SoC with my new Linux image

SUCCESSFUL SYNTHESIS, ROUTE, PLACE AND BIT GENERATION

This is a quick blog post to let everyone know that I was able, using the newly created Ubuntu virtual machine and newly installed Quartus Prime 22.1 toolchain, to complete a build of my logic for the FPGA and SoC side of the design. Finally!

NEXT STEPS

At this point I want to go over all the signal pathways between modules, to be sure that everything is linked as I intended. I will also carefully examine the messages produced in order to clean up things like unused signals or unintended latches. 


Saturday, March 25, 2023

While fighting the USB drive access by VMWare, the virtual machine came back!

REBUILT THE UBUNTU LINUX IMAGE AND RETURNED TO QUARTUS WORK

Although the virtual machine did resume booting up and Quartus would launch, I thought that the condition of the files on it was suspect. I used a USB drive to retrieve the entire project directory but encountered a number of errors trying to read certain files and directories, which reinforced my assessment that this virtual machine image wasn't usable.

I set up an entirely new Ubuntu image and installed Quartus on it, going carefully through the many scattered documents on the Intel site. One listed quite a few libraries that should be added to Ubuntu Enterprise, with no separate guidance for the Ubuntu Desktop edition I was running; I added or updated all of them before installing Quartus.

The image is up and I finished applying the environmental variable fixes that I had previously had to discover. At this point the system appears to be working properly.

RECREATED THE PROJECT ON MY NEW LINUX IMAGE

I moved all the source files over and then used Platform Designer (QSYS) to create the System on a Chip (SoC) and link it into my FPGA code. I am now in the process of working through various issues getting everything compiled successfully.


Thursday, March 23, 2023

Preparing USB bootable Ubuntu to attempt to recover the VMWare disk image

UBUNTU DOWNLOADED AND PUT ON USB DRIVE

I built a USB drive using Rufus to load the Ubuntu ISO file in order to boot the virtual machine with it. That would give me a working Linux system with which I could run the fsck utility to check and repair the virtual disk drive. It may not be possible but I hope it will work so that I can recover my work. 

ATTACH THE USB DRIVE TO THE VMWARE IMAGE IN ORDER TO BOOT IT

Next issue - of course - is that VMWare Player is not seeing the thumb drive, although it has successfully seen all my other drives when inserted. I am now chasing down information about correcting this so I can boot up. Alternatively I could attach the Ubuntu ISO as a virtual optical drive, I guess.


Wednesday, March 22, 2023

Close, too close for the fates to allow, thus . . .

WORKED THROUGH THE TCL SCRIPT ISSUES AND ABLE TO WORK ON COMPILATION

I eventually sorted out the environmental variables and paths in order to get the two key scripts, hps_sdram_p0_parameters.tcl and hps_sdram_p0_pin_assignments.tcl, to execute successfully. As a consequence, the bizarre fitter errors during compilation no longer occurred. 

DISCOVERED THAT THE GPIO SIGNALS FROM HPS ARE HARD ASSIGNED

The Hard Processor System (HPS) side of the chip has a peripheral multiplexing capability that allows some signals to be routed as LoanerIO or GPIO to the Field Programmable Gate Array (FPGA) side. There are almost 70 possible GPIO or LoanerIO numbers to be assigned. 

Since I was going to route signals related to driving the LCD touch module, I put the GPIO signals close to the Serial Peripheral Interface (SPI) master signals, which gave them numbers in the 60s. I assumed that on the FPGA side this was just a signal that I could then assign to some real FPGA side connector pin in an arbitrary manner.

I came across a small section in one of the manuals (of the tens of thousands of pages of such documents that are essential to reference to work with the System on a Chip - SoC) that showed me a fixed assignment of these almost 70 numbers from the HPS side to the connector pins on the FPGA side. For example, the signals in the 60s are routed to GPIO Bank 2, which is NOT implemented on the board I am using. 

I changed these down to the range of numbers that correspond to the second 40 pin connector block on my board. Thus, GPIO-29 in the HPS side is GPIO Bank 1 pin 0 on the FPGA side. Some of the errors I was experiencing were due to the use of Bank 2 numbers. 

NEARING FINAL STROKES, I THINK

With these sorted out, I just had a few minor HDL issues to repair, and I expected that I could finish the compilation to produce the binary load file that will configure the SoC chip on my board. I had a meeting in a restaurant, after which I would complete the work, or so I thought. 

ALAS IT IS NOT TO BE

I returned, fired up VMWare Player, and attempted to start my Ubuntu image. It failed to start due to a corrupt virtual disk image. Unless I can find a way to recover this, which doesn't sound promising from the Google searches I have done so far, I am going to have to back way up, set up another Ubuntu virtual machine, install Quartus again, then apply all the fixes, if I can remember them, to make it operational.

This won't stop me in the long term but it does senselessly waste a day or so of effort fighting my way through all those steps to return to this afternoon's state.

Tuesday, March 21, 2023

Debugging and repairing the Quartus software on Ubuntu Linux

ENVIRONMENTAL VARIABLES NOT FULLY SET

The installer updated one environmental variable but left another unset and did not include the Quartus executables in the PATH variable. I worked through that and was able to have all the components of Quartus execute properly. 

ERROR MESSAGES FINDING TEMPORARY FILES DURING QSYS

When you finish using Platform Designer (formerly called Qsys), the resulting design that ties the Field Programmable Gate Array (FPGA) side to the Hard Processor System (HPS) side has to be generated. In that process, TCL scripts are run and these failed for several intellectual property elements (i.e. not code I wrote or control). 

After a bit of digging and debugging I found that it was failing to open some temporary file, which I presume should have been written earlier in the generation process. I got lucky and found a match with a Google search which referenced another post that clarified the problem. Quartus depends on a number of Linux facilities being installed, but does NOT check during installation nor throw an intelligible error. 

NOW ABLE TO GENERATE THE SOC CORE FILES

I updated my Ubuntu system with the (secret) requisite functions and the generation completed fully successfully. That is the process which also generates the TCL scripts that have famously failed for me on all the other releases and OS platforms. Fingers crossed, it would now compile and run those scripts. 

COMPILATION OF MY LOGIC STILL FAILING ON SCRIPT ERRORS

Regardless of the OS or the Quartus version, I get the same error attempting to execute the provided TCL script. Right at the start of the script it attempts to run the SDC_EXT package which is rejected because that can only be loaded from STA or FIT packages. As far as I understand the intent of Quartus, it should be running STA for the command line and any scripts. Trying to manually load STA results in an error that no such package exists.

I see a few possible causes for this inept behavior of the installed Quartus product (not as it presumably behaves back at Intel, but as it runs on my various systems):

  1. Some path or environmental variable is missing which lets the software find the STA package
  2. Permissions are incorrect for the package or its path
  3. The installer failed to install those packages
  4. Configuration of Quartus is incorrect such that it doesn't execute STA for the console
  5. Another secret prerequisite is some specific older version of TCL instead of the version installed on my various Windows and Linux systems

WHY THE SCRIPTS MATTER

The Quartus system builds these helpful but somewhat opaque scripts to set up all the parameters and pin assignments that are needed to build the FPGA logic. The DDR3 RAM, for example, has a myriad of pin assignments, voltages, modes and other settings that are essential for the RAM access to work properly. 

Until these scripts are run and thereby configure these settings for the FPGA, the router/fitter logic can't implement the logic that was synthesized from your HDL plus all the IP HDL from Altera/Intel that glues everything together. This is where I was failing, with issues placing components and discrepancies in voltage levels. I set up the HPS side DRAM for 1.25V logic but the error messages showed pins set to 2.5V, clearly residual values which would have been overridden had the script executed properly. 

SEEKING OUT THE STA PACKAGE TO ELIMINATE (3) AND INFORM FUTURE DIAGNOSIS

I do indeed see the STA executable as well as the general TCL library of packages such as SDC_EXT, so they were installed. One avenue would be to dig into whether there is a missing environmental variable or something wrong in the path permissions. 

NEXT STEPS DIGGING INTO THE FAILURE

I discovered that I could execute some scripts by using the command line and issuing quartus_sta with a parameter of the TCL script. I was able to run the offending script that had failed before, but it still had not set all the pin assignments and slew rates required for the fitter to assign logic to locations in the FPGA. 

I tried to run the script hps_sdram_p0_pin_assignments.tcl from command line with quartus_sta but it kept complaining about invalid project name, a required parameter on the command line for the script. I feel that I am getting closer and closer to the point where I can produce a binary file to load the FPGA with my logic. 

SCRIPT EXECUTION ATTEMPT NUMBER 999 - RESULTS xxxxxxxxxx

xxxxxxxxxx

Monday, March 20, 2023

More on the installation software for Quartus on Linux

UBUNTU IS A SUPPORTED LINUX FOR THE INTEL SOFTWARE

I looked over the support requirements for the Quartus toolchain and saw that Ubuntu Linux is supported for all the releases I am trying. I therefore don't need to fool around adding yet another Linux image under VMware under Windows on my laptop. 

BOTH RELEASE 17.0 AND 17.1 MALFUNCTION

Both of those two oldest available releases produce the same hang when running on Ubuntu. The installers run in graphical mode and at the end of certain progress windows I get pop-up errors that the program is unresponsive. No amount of selecting "wait" leads to it completing the install. 

INSTALLED LATEST RELEASE 22.1 ON UBUNTU 

I went to the other extreme, the current version of Quartus, and attempted the installation. It did seem to install properly, although it did not set up environmental variables required for the application to execute. I will fiddle with the system in order to fire it up and see whether it is able to successfully execute the TCL scripts that were the roadblock on my Windows versions. 

Sunday, March 19, 2023

Hail Mary, attempt 2 - install toolchain on Ubuntu Linux under VMWare on my laptop

MAYBE IT'S JUST A WINDOWS THING

Layering Unix under Windows on top of the other layers of the toolchain may be the cause of this. I will shift over to a native Unix environment for the toolchain, although not the official Red Hat Linux they list under supported environments. 

VMWARE ON MY LAPTOP, UBUNTU LINUX IMAGE, DOWNLOAD QUARTUS

My second laptop now has VMWare Player installed and a Linux image. Initially I only had 30GB of disk configured, which was not enough to install the bloated Intel toolchain. After expanding the disk, I attempted an install of Quartus Prime. 

Fresh VMWare, fresh Ubuntu, but the Intel installer hung up at the end of installing the Prime help and couldn't recover. This happened twice, the second time after cleaning up the remnants of the first try. Once again, shoddy toolchains become a major barrier. 

I guess it is time to install the exact Red Hat Linux listed by Intel as the supported environment, to give them the best possible chance to deliver a working product. 

Saturday, March 18, 2023

Fixed one issue but still have problems with Quartus

A HINT ABOUT ENVIRONMENTAL VARIABLES IDENTIFIES ONE CAUSE

I had previously installed the 22.0 version of Quartus Prime (the current one), then deinstalled it in order to install 17.0. The uninstaller failed to remove the old environmental variables pointing at 22.0 locations. The installer for 17.0 failed to correct that, leaving me with variables pointing to a nonexistent directory.

Repairing those environmental variables improved things, in that the second TCL script, which had appeared to work earlier, now generates many more messages about the actions it has taken, indicating that it is doing more of its job. Alas, the third (pin map) script still fails and the TCL installation is still missing the relevant packages necessary to proceed further. 

DOING AN UNINSTALL WITH CLOSE INSPECTION AFTERWARDS

I did an uninstall of Quartus and found I still had two other related products. I kept hunting for uninstallers until the directory was empty. I was left with the Arm Design Studio after exhausting all the Intel provided uninstallers, so I used the Windows Control Panel to do that uninstall. The last step was to locate and delete the environmental variables that were left in the system. 

RERUN THE SCRIPT

Completely clean, the reinstalled Quartus toolkit exhibited EXACTLY the same error as before. It is possible that this is a consequence of the inevitable screwing around that vendors do with every release of their software, causing something that worked fine in, say, release 16.0 to fail in 17.0 because of the changes. 

Sadly, Intel has pulled 16.0 such that, even though there are still devices (Cyclone V and the Terasic boards) which are supported by the release, they won't even offer it for 'as is' download. If this is the problem, I am not seeing signs of it from web searches. 

More toolchain hell

MOVED TO OTHER LAPTOP BUT NEWER QUARTUS CAUSES DIFFERENT PROBLEMS

The other laptop had Quartus 22 installed instead of the 17.0 I used on the original laptop. I had chosen the older release because of insurmountable incompatibilities introduced by the changes Intel made between 17 and 22: the demonstration files from Terasic, based on 16, would not successfully build any of the demonstration examples under 22. 

The first issue that cropped up was a request to upgrade the IP in my project, which is the SoC component to be generated as well as all the bridges and clock modules I included. Once these were updated I tried to regenerate the SoC core but received an error that I needed the Windows Subsystem for Linux in order to proceed. That is already installed and active on my system. I remember this was one of the drivers for falling back to Quartus 17 on the other laptop. Ergo I can't proceed on the second laptop. 

POKING INTO THE TCL SYSTEM USING THE CONSOLE PROVIDED BY QUARTUS

I started to hand execute the TCL script that was failing, the one that will do the pin mapping. Indeed it fails because it tries to load the package SDC_EXT which complains that it must be called from Quartus_sta (or quartus_fit). 

I then issued the command to list all the packages available to load from TCL and did NOT see Quartus_sta, Quartus_fit nor SDC_Ext in the loaded list nor in the list available to load. This seems to be the crux of my issue, a defective setup of the packages that will be called from TCL.

IF THINGS WERE DOCUMENTED . . . .

If the role of this script and the pin mapping were documented, along with the files it chases through to collect its information, then I might be able to modify the target files myself with the proper information. However, they are not documented and therefore this avenue is closed off, barricaded and forever forbidden.

HINT FROM MORE SEARCHING

There are reports of similar issues suggesting that the install root directory and Windows environmental variables, if not set correctly, can lead to these sorts of failures to load packages. Time to carefully study my environmental variables, the install location for Quartus and any other hint as to the defect.

Use of HPS peripheral mux to GPIO, plus another toolchain whacks me in the head

DISCOVERY ABOUT THE USE OF HPS ROUTED GPIO SIGNALS

In the final phases of routing all the signals and ensuring the various modules and external pins were properly connected, I discovered that the HPS side links to GPIO are intended solely to connect to the external FPGA GPIO pins, not to furnish signals inside the FPGA which could be acted upon.

This is absolutely good for my purposes, as those pins were mostly used to route connections to the LCD/touchscreen module I will use for the user interface. The one other pin is used to send a signal from the HPS to the IBM 1130 disk drive to indicate that the heads are loaded so that the drive can switch to "ready" status. There is no need for logic in the FPGA side to be involved.

TOOLCHAIN TOMFOOLERY, ACT SEVENTEEN

Faithful readers of the blog will remember that I switched over to the Quartus toolchain and a board using the Intel/Altera System on a Chip (SoC) when I had exhausted too many weeks fighting with the Xilinx Vivado toolchain. These problems are all issues with the function of the toolchain and have ZERO relationship to any of my logic or input, yet they block me from completing the generation of a project.

Now a problem has struck with the Quartus system and to date no Google searches have located any hints as to the cause nor the solution. The toolchain helpfully handles the details of the IO pin assignments and other parameters required to fully generate the SoC logic. Helpfully as in obscurely and with almost zero documentation. Helpfully as in failing with an internal error. 

When producing a design for an SoC device, there are a number of TCL scripts that must be run from Quartus to handle these pin details. The first two, setting parameters and pin assignments, worked well. The next one, responsible for 'pin mapping', failed. In the details I see that it has to do with requesting a TCL package SDC_EXT (performs Intel SoC related functions) which throws up errors stating that it can only be loaded from Quartus_STA or Quartus_FIT packages.

I issued a load package command to TCL to load Quartus_STA, which should already have been active, but it failed with an operating system error about being unable to load the package.dll file, which I suspect is what does the actual loading of TCL packages.

There was no direct match to others having errors loading package.dll but some had similar errors loading other dll files related to Quartus TCL operation. Those suggested that it was an issue with Windows Visual C libraries requiring installation of the 2013 Visual C Redistributable packages. Those show as already installed on my system, but I was willing to reload to see if this fixed the problem.

As I mentioned in an earlier rant, these toolchains consist of Linux command line programs scaffolded with a GUI and then that environment is further scaffolded to run within Windows. Plenty of opportunity for things to go awry. 

The Microsoft downloads recognized their prior installation status and prompted me for a Repair action which I performed. And . . . zero change. Still fails to run the script to do the pin mapping, without which the Quartus fitter can't properly assign the synthesized logic to elements within the FPGA fabric. 

I will transfer all my files over to a different laptop which has Quartus installed, on the off chance that there is something corrupted on this system. Kind of a "Hail Mary pass" but it is all I can think of to move forward past the latest gratuitous toolchain brickwall. 

Wednesday, March 15, 2023

Well, almost correct solution

LOOKING FOR THE ACTUAL CONNECTOR PINS FOR CANBUS ETC

My great idea to simply reassign to connectors like the Canbus interface depends on there being an external pin for them. There is not. Some of these functions may be routed to a connector, as is the case when either SPI or I2C is instantiated and automatically routed to the 2x7 LTC connector, but others have no physical instantiation. 

I still need to find external pins for HPS signals related to the LCD interface module, as well as for receiving the HeadsLoaded signal to detect when the drive turns off. Not deterred, I looked deeper into all the capabilities offered in the System on a Chip (SoC) IC. 

ANOTHER WAY TO EXPORT SIGNALS FROM HPS TO FPGA

The only logical function for which I needed to export signals was the Serial Peripheral Interface (SPI) master, which is controlled by its driver under Linux. The other signals are something that I will read or write via the General Purpose Input Output (GPIO) method I wrote of previously.

I therefore set up GPIO signals for the following:

  • select LCD panel
  • select touch screen
  • stream is data or command
  • reset LCD
  • HeadsLoaded signal

I found that the Qsys system exported those as conduits - signals I can access from the FPGA and therefore hook to various external pins (confusingly also called GPIO). I have the five HPS side GPIO signals exported along with the SPI master clock, MOSI and MISO lines. These will all be connected to selected pins on the other type of GPIO, a physical connector on the board. 

GETTING THE INTEGRATED TOP LEVEL MODULE IN SHAPE TO BEGIN TESTING

Since the top level module produced by this toolchain is in Verilog, but I am most familiar with VHDL, I have to create instantiation templates for every one of my VHDL modules, which is a somewhat tedious process of capturing the entity declaration from VHDL and then modifying it to fit the Verilog format as an instantiation. 

Next up was the routing of signals between the various modules. Each external signal from the IBM 1130 requires synchronization to avoid metastable conditions, passing it through a chain of flipflops clocked by the FPGA to get the edges aligned with our clock rather than the autonomous 1130 clock. 

Tuesday, March 14, 2023

Great discovery solving my IO pin shortage

REASSIGNABLE EXTERNAL CONNECTIONS VIA PERIPHERAL MUX TABLE

The Qsys utility of the Quartus toolchain is where the System on a Chip (SoC) is configured, particularly the Hard Processor System (HPS) and its links to the Field Programmable Gate Array (FPGA) side. I discovered a great feature as part of that tool.

One aspect of configuring the HPS within that tool is setting up the peripheral pins you will use, such as the ethernet or Serial Peripheral Interface (SPI) functions. At the bottom of that screen is the peripheral mux table. The table lists every external connector pin of the chip as a row, and the columns show the alternative functions that could be routed to that pin. 

At first I thought of this as a warning, since not every function can be configured simultaneously. One of the columns of that table is labeled Loaner IO. After a bit of reading, I realized that you could select any external connector pin that wasn't in use by a selected function of your design and route it to the Loaner IO. 

Loaner IO is a bus that is connected across to the FPGA side. For each of the 67 rows of pins, this bus allows the FPGA side to set the pin as input or output and to access the signal from the FPGA. This offers the ability to give the FPGA side even more pins than it has directly connected. That is the reverse of what I needed, but it was a hint that what I needed was indeed possible. 

GPIO ACCESS IN LINUX

The column in the peripheral mux table that corresponds to the Loaner IO is the GPIO signals column. These are signals that can be accessed from within Linux as long as you selected the external connector pin to be routed to its GPIO peer. They appear to the FPGA as tri-state connections, thus I can ignore that end and access them purely from my Linux application. 

SPI MASTER HAS TWO VASSAL SELECTS AVAILABLE, LCD AND TOUCH

The included SPI Master 0 on the SoC offers two select pins, thus it can directly select between two SPI devices using the Linux driver. One will control the LCD panel and the other controls the touch screen that sits over the panel. These are standard assignments which don't require any special actions to enable. Note that using the second select pin takes over one of the UART external connections, thus we can't have the UART and the dual select SPI master at the same time. 
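
To make the dual-select arrangement concrete, here is a minimal C sketch assuming the SPI master is exposed through the standard Linux spidev driver. The /dev/spidev32766.x device names, the clock speeds and the SPI mode are assumptions that depend on the device tree and the LCD module datasheet, not values confirmed here.

```c
/* Hedged sketch: assumes HPS SPI Master 0 appears under the Linux spidev driver
 * as /dev/spidev32766.0 (LCD) and /dev/spidev32766.1 (touch controller); the
 * actual bus number and mode depend on the device tree and module datasheet. */
#include <fcntl.h>
#include <linux/spi/spidev.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

static int open_spi(const char *path, uint32_t speed_hz)
{
    int fd = open(path, O_RDWR);
    if (fd < 0) { perror(path); return -1; }

    uint8_t mode = SPI_MODE_0;        /* clock polarity/phase assumed for this module */
    uint8_t bits = 8;
    ioctl(fd, SPI_IOC_WR_MODE, &mode);
    ioctl(fd, SPI_IOC_WR_BITS_PER_WORD, &bits);
    ioctl(fd, SPI_IOC_WR_MAX_SPEED_HZ, &speed_hz);
    return fd;
}

static int spi_write(int fd, const uint8_t *buf, size_t len)
{
    struct spi_ioc_transfer xfer;
    memset(&xfer, 0, sizeof(xfer));
    xfer.tx_buf = (unsigned long)buf;  /* bytes going out on MOSI */
    xfer.len = len;
    return ioctl(fd, SPI_IOC_MESSAGE(1), &xfer);
}

int main(void)
{
    int lcd = open_spi("/dev/spidev32766.0", 25000000);   /* select 0: LCD panel */
    int touch = open_spi("/dev/spidev32766.1", 2000000);  /* select 1: touch controller */
    if (lcd < 0 || touch < 0) return 1;

    uint8_t nop = 0x00;               /* placeholder byte; real commands come from the LCD datasheet */
    spi_write(lcd, &nop, 1);

    close(lcd);
    close(touch);
    return 0;
}
```

The driver asserts the appropriate select line for whichever device node the transfer is issued on, which is why no manual chip select handling appears in the sketch.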

TOOK OVER UART AND CANBUS PINS VIA LOANER IO FOR REMAINING SIGNALS

I routed the second UART connector pin to a GPIO that I will write from my application, to control the Command/Data pin of the LCD module. I picked one of the external pins normally tied to the Canbus link for my HeadsLoaded signal, which I can read from the application to verify that the physical drive has not spun down. 

BOTTOM LINE ADVANTAGES

I no longer need to solder onto the USER KEY button on the HPS side, nor to use bridge signaling to pass signals across sides. The mux table plus GPIO gives me the ability to get the extra signals to the HPS side. 

Monday, March 13, 2023

Update on LCD module connection ideas

GOOD NEWS - CAN PICK ANY FPGA SIDE IO PIN TO TRIGGER LINUX INTERRUPTS

The System on Chip (SoC) integrated circuit includes a handy feature to deliver up to 32 interrupts to the Hard Processor System (HPS) side to be handled by Linux, allowing me to route a signal from the Field Programmable Gate Array (FPGA) side including from external connector pins. Thus, I can route the Touch Screen interrupt over a general purpose I/O pin on the FPGA.

From the software side, I need to register an interrupt service routine (ISR) for the interrupt that will be seen when my I/O pin goes active from the Touch Screen. Interrupt numbers from 73 upwards are assigned to the block routed from the FPGA side. I have example code, thus I expect this to be relatively painless to accomplish. One signal down, several still to go. 
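
As a hedged illustration of one way to wait for that interrupt from user space (not necessarily the example code mentioned above), the Linux UIO framework lets a program block on a read of the UIO device node until the FPGA-routed interrupt fires. The /dev/uio0 name assumes the interrupt line has been described in the device tree as a UIO device.

```c
/* Hedged sketch: block until the touch interrupt arrives via UIO, assuming the
 * FPGA-to-HPS interrupt is exposed as /dev/uio0 by the device tree. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/uio0", O_RDWR);
    if (fd < 0) { perror("open /dev/uio0"); return 1; }

    for (;;) {
        uint32_t enable = 1;
        write(fd, &enable, sizeof(enable));          /* unmask/re-enable the interrupt */

        uint32_t count;
        if (read(fd, &count, sizeof(count)) != sizeof(count)) {  /* blocks until it fires */
            perror("read");
            break;
        }
        printf("touch interrupt seen, total so far: %u\n", count);
        /* here: query the touch controller over SPI for the coordinates */
    }
    close(fd);
    return 0;
}
```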

BAD NEWS - STILL HAVE SHORTAGE OF HPS SIDE GPIO PINS

That still leaves me with a deficit of external connections to the HPS side. I need two pins to select the two devices - LCD and touchscreen - of the module. I need another signal to indicate when a particular communication is data and when it is a command to the LCD. Why not route the HeadLoad signal the same way, as it is probably just as easy to route five as four. That does mean I don't need to tack onto the User Key pushbutton at least. 

HOW I THINK THIS CAN BE HANDLED

Four of these signals are outputs - the HPS side will write to these and we just need them to get to an external connector. We can make the H2F bridge responsible for sending these bits in addition to its role reading and writing blocks of four 16 bit words for cartridge loading and unloading. All we need is to assign unique addresses to the two roles and use address range splitting to route the write transactions from the HPS side to a new bridge and state machine that accept those signals and output them on the I/O pins. 

The HeadLoad signal is, however, an input not an output. Therefore the H2F bridge is not ideal for this because it is the Linux side that decides when to read or write, possibly missing a drop of the signal for a while. There are two methods that come to mind for dealing with this input signal. 

One is to continue with my plan to wire to the User Key pushbutton on the HPS side. That way the Linux program can simply read the state of that 'pushbutton' to see whether the heads are still loaded or not. It can check often and immediately send the command to turn off drive_ram.

The other way is to update my H2F LW bridge to interrogate the state of drive_ram by reading from a new dedicated address, different from the Status word. This runs the risk of an extended delay in recognizing that the signal has gone off. I can minimize the impact by ensuring that the FPGA side immediately shuts off drive_ram rather than waiting for the HPS side to send that command. 
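
To illustrate the first method, here is a hedged sketch of reading the User Key line from Linux through the legacy sysfs GPIO interface; the GPIO number 171 is a placeholder, since the real number depends on how the kernel maps the HPS GPIO banks.

```c
/* Hedged sketch: poll the HPS User Key line via sysfs GPIO; "171" is a
 * placeholder GPIO number, not the real mapping for this board. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int read_gpio(const char *path)      /* returns 0, 1 or -1 on error */
{
    char value;
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror(path); return -1; }
    if (read(fd, &value, 1) != 1) { close(fd); return -1; }
    close(fd);
    return value == '1';
}

int main(void)
{
    /* Assumes the pin was exported first, e.g. echo 171 > /sys/class/gpio/export */
    int level = read_gpio("/sys/class/gpio/gpio171/value");
    if (level < 0) return 1;

    /* Per the wiring plan, ground (0) means heads not loaded / button pressed */
    printf("HeadLoaded line reads %d\n", level);
    return 0;
}
```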

Wiring, pin assignments, and new LCD screen for user interface

ISSUE OF CONNECTIVITY TO MINIMIZE OR ELIMINATE BRIDGE USE

The Terasic DE10-Nano board I am using has a wealth of connections and devices on the board, but those devices are connected to just one of the two sides of the System on Chip (SoC) Cyclone-V integrated circuit. Some are on one side, some are on the other. 

Aspects of my design are best done on one side or the other. For example, the detailed hardware interaction with the IBM 1130 disk controller is best done in the Field Programmable Gate Array (FPGA) side while the user interactions to select virtual 2315 disk cartridges are best done under Linux on the Hard Processor System (HPS) side. 

Maintaining a screen buffer and driving an LCD screen is much easier in software, thus on the HPS side, but some screens are driven by parallel I/O pins that are only wired to the FPGA side of this board. Serial interfaces such as SPI, I2C and UART are wired to the HPS side.

It is possible to access connections on the other side, but one has to make use of the bridges between sides such as HPS2FPGA, FPGA2HPS and HPS2FPGA-Lightweight which adds asynchrony, delays and complication compared to using a connection from the side where it is terminated. I therefore looked to locate my signals and external devices on the appropriate side of the SoC to minimize or even eliminate the bridge traffic. 

NEED SIGNAL FROM 1130 TO HPS SIDE FOR HEAD LOADED STATUS

Almost every signal between the IBM 1130 disk controller and my board should be connected to the logic in the FPGA. There is one signal, however, that should also be visible to the HPS side. This is the HeadLoaded signal, which goes on when the drive is spun up and thus ready to do input-output, and turns off when the drive is switched off to remove the physical cartridge. 

My design has the HPS side application control whether the FPGA logic is activating the drive modeling (signal drive_ram) or is available to load and unload virtual cartridge images. A command from HPS is sent when we see the HeadLoaded signal go on to tell the FPGA to begin emulating the disk drive. 

At the end of a session using a virtual cartridge on the disk drive, the user turns off the drive motor and our HeadLoaded signal goes off. At this time our SoC board contains the image of that cartridge which may have had data written or changed from the version when it was first loaded. Thus, our HPS side must perform an unload operation over the bridge with the FPGA to return the updated cartridge image to the file sitting on the SD Card. In order to do an unload, we must turn off drive_ram by a command from the HPS. 

From a safety standpoint, the FPGA will also see the HeadLoaded signal so that it can stop handling reads or writes, thus it too can switch the drive_ram signal off. In order to keep the HPS and FPGA sides in sync, I have added commands to HPS which can ask the FPGA for the current value of drive_ram so that we can be sure both sides have it turned off when the disk is spun down. 

There is one pin on the HPS side, located in the Linear Technologies Corporation (LTC) header, that is labeled GPIO. However upon investigation of the schematics of the DE10-Nano board I see that this pin is used to switch that LTC connection between the SPI and the I2C serial links that are hosted on it. Thus, it would interfere with the link as its value changes. 

I did find one pushbutton on the board that is wired to the HPS side, named the HPS User Key. This button pulls a line down to ground when the button is activated. For my purposes, I can connect the HeadLoaded signal to the high side of the button. When the drive is not spinning with heads loaded, this will be at ground and the HPS will believe our pushbutton is depressed. When the drive is ready, it appears that the button is not depressed. The pushbutton is therefore in parallel with our HeadLoaded signal; either can activate the line, pulling it down to ground.

There is no external connector tied to the HPS KEY wire shown above, but I can tack on a wire to the pins of the microswitch in order to provide the link for the HeadLoaded signal to drive this line.

SCREEN AND OTHER UI ELEMENTS SHOULD ONLY CONNECT TO HPS SIDE

The lack of general purpose I/O pins on the HPS side requires me to find a means of connecting the user interface devices by one of the methods available there. I could use serial connections over the UART but that would require a separate terminal or computer running a terminal emulator. The LTC connector features I2C and SPI connectivity, although not simultaneously over its standard 2x7 connector. 

I found a 3.5" LCD display that operates over SPI, which is a good size for use inside the IBM 1130. A user will swing open a side door and see the control box for this Virtual 2315 Cartridge facility, select from among all the disk cartridge image files that are located on the SD Card hosted on the DE10-Nano board and see the state of load and unload operations for files that were selected. 

DIGGING THROUGH POORLY DOCUMENTED LCD MODULE

The documentation for this module, similar to all the hobbyist modules that come with great features at a low price from China, is very sparse. The module is intended for use with a Raspberry Pi Pico board, allowing that board to plug onto the back of the module. It offers the LCD, a touch interface, plus an SD Card reader and an on-board nonvolatile RAM to save parameters. 

It is advertised as an SPI interfaced module, which would mean three wires plus select lines for each device on the module. I would have expected seven wires, but there are additional lines present. There is no overall guide written for this, only a Wiki entry with some datasheets for the included chips on the module and for the LCD panel along with a few demo programs intended for use on Raspberry Pi Pico.

I do have schematics and the individual datasheets, from which I pieced together answers about all the signal lines I see wired to the Pico socket. One confusing item is that the SD Card can be accessed with either SPI or I2C protocols, based on a header pin setting on the module. In I2C mode there are six signals which I can safely ignore, both because I would use only SPI and more importantly because I don't intend to use the SD Card on this module. I will ignore the NVRAM as well. 

Thus, I only need the SPI link signals SCLK, MOSI, and MISO for the actual communications. One signal selects the LCD device and another selects the Touch interface atop the screen. That leaves just four more signals, plus power and ground, that I have to account for.

One signal is used to turn the backlight of the LCD on and off. It will be hardwired on. One wire resets the LCD to a known state, which I will wire to my overall power-on-reset signal by routing it out of the FPGA side on an I/O pin where I will wire it to the LCD module reset. Two more signals are more interesting - Touch Interrupt, which should be used to tell the HPS that we have a touch on the screen, and LCD Data/Command, which tells the LCD whether the SPI stream is a command or data for pixels. 

Alas, that means I need another general purpose IO pin from the HPS side in order to properly inform the LCD module of the type of SPI stream it will receive. I made use of HPS User Key as an input, but now I needed to find an available output I could drive. 

While there are devices on the HPS side we are not using, such as the UART or the G-force sensor, two issues complicate using them. First, they do not have external connections (the UART is routed through an FT232R USB-serial converter chip). Second, these are controlled by Linux device drivers that I would have to modify in order to manipulate any of the signals. 

The one remaining signal, which is an output, that I could conceivably use is the HPS LED signal, a user controlled LED. It does not have an external connector, but I could desolder the 330 ohm resistor between the signal line and LED, then solder a wire onto the signal side of that resistor. I am not thrilled by having to tack solder two different wires to the DE10-Nano board, but if it saves a ton of extra work then I may have to accept this approach. 

Using the touch feature of the module would dictate that I connect the interrupt line from the module to the HPS system somehow and use it to alert my application that the user has made a touch. 

C libraries to control the LCD panel are available, originally used in Arduino and Raspberry Pi environments, that I should be able to modify slightly to run under the HPS Linux and manage my LCD module. 

RETHINKING 1130 TO SOC BOARD ELECTRICAL CONNECTIVITY DESIGN

The nature of the SLT logic gates to which my FPGA will interface is that they are connected through diodes to an internal pullup inside the gate. Thus, I don't have to source any voltage or current to output a logical 1. Sinking current to ground through the diode is how we output a logical 0. Thus the input is best fed by an open collector gate which either pulls to ground or does nothing. 

My present interface board uses mosfet bidirectional level shifters, which would (uselessly) shift the FPGA output of 3.3V down to 3.0V (SLT level). In fact, an SLT gate won't care if it sees 3.3V on its input since it is isolated by the diode, whose breakdown voltage is way, way higher. 

SLT gate outputs have a collector pull up resistor to bring the output up near 3V when the output should be high and it sinks down to ground when the output is low. Thus the bidirectional level shifter is still useful here as the FPGA input needs to hit its minimum high voltage to sense a valid high state. 

Typical SLT NAND gate

NEW PCB NEEDED FOR INTERFACE BETWEEN 1130 AND BOARD

I will design a new interface board that will simply route my outputs to the SLT gates directly. Alternatively I could put in a transistor inverter in open collector mode, but I don't think that is necessary. The PCB will reserve the mounting pads for this so that if I experimentally determine the need I can adjust the existing board easily.

Fewer of the level shifters are needed, since I have only a few outputs from my FPGA to the SLT logic. My design also changed in that I did have some circuits running to a 5V Arduino which is no longer part of the project. This should be an easy change to make to the KiCAD files and get these out to the foundry in advance of needing them. 

Friday, March 10, 2023

Designing C code to write over the HPS to FPGA bridges

MEMORY MAPPING OF THE BRIDGE HARDWARE

The hardware on the Hard Processor System (HPS) side, including the bridges but also many other devices like ethernet controllers and SD Card controllers, is accessed by reading and writing specific locations in memory. That is, the hardware is memory mapped. The overall address space for these ARM processor systems is 4GB, but ranges of addresses are carved out for the various peripherals. 

Our Cyclone V system on a chip  (SoC) provides four bridges between the HPS side and the Field Programmable Gate Array (FPGA) side. One of them is actually a bridge between the FPGA and the DRAM controller logic in the HPS system - the F2SDRAM bridge - which can directly access all of the SDRAM. The other three are the H2F, F2H and H2F LW (Lightweight) bridges.  

Only these last three are memory mapped into the processor address space since the first one only communicates with the DRAM controller. Looking at the three address ranges below, you see that the F2SDRAM has the rightmost map - addresses from 0 up to 4GB will directly access those same addresses in DRAM. Since this board has only 1GB of DRAM installed, the F2SDRAM bridge can only use addresses from 0 to 1GB.

Three addressing ranges

The middle address range is what the CPU sees, thus my programs (and Linux) have this view. The left address range is the one visible to the F2H bridge, which we are not using in this project. The left and middle ones both map the range of addresses from 3GB up to 4GB to the FPGA and the peripherals.

The H2F bridge is accessed by touching addresses from the 3GB line xC0000000 up to the peripheral zone start at xFC000000. These are mapped down to addresses inside the FPGA, with the start of the area (xC0000000 on the HPS side) appearing as x00000000 in the FPGA. Thus the H2F bridge has 960MB of addressable words that can be routed to various logic in the FPGA, where the logic uses the address to determine if it is being selected. 

My logic in the FPGA implements a single four word buffer and ignores the address. Thus, any of the 960MB of addresses from x00000000 onwards will simply read or write this block of four words in the FPGA. From the HPS side, whether I write to xC0000000 or xE8000000 I am still reading or writing my four word block in the FPGA. 

The last 64MB of the address range is the peripheral area, xFC000000 to xFFFFFFFF, where all the peripherals and control registers are accessible. A 2MB slice of this peripheral region is assigned to the H2F LW bridge, the lightweight bridge. Akin to the other bridge, the address xFF200000 is the start of the H2FLW address range and these become addresses 0 to 2MB on the FPGA side. 

My logic in the getcommands module treats any address in the 2MB range beginning at xFF200000 in the HPS side address range as the single command word. I could write to xFF200000 or xFF200100 and it will be accepted by my getcommands module as a command or status word request. In the FPGA, I ignore the address, so whether it comes in as x00000000 or x00000100, using the hypothetical above, it is accepted as a write of the command word or a read of the status word. 

CONTROL REGISTERS FOR ACTIVATION OF EACH BRIDGE

The peripherals region implements 64 MB of various peripheral devices and control words. The configuration registers for the various bridges (H2F, F2H, H2FLW and F2SDRAM) are implemented at defined ranges. The H2F bridge control words begin at xFF500000 while the H2FLW control words start at xFF400000, immediately following the end of the 2MB range of the H2FLW bridge.

Writing values that set defined bits of defined words in the bridge configuration registers, called the Global Programmers View (GPV) in the documentation, configures and activates that bridge. The F2SDRAM bridge does not have GPV registers nor need explicit activation, I believe, unlike the other three. 

ADDRESSING UNUSED FOR BOTH H2F AND H2FLW BRIDGES, ESSENTIAL FOR F2SDRAM

At the present time, with the two H2F bridges used for one purpose each, there is no need to discriminate the address for a read or write as we know it is always the one and only word or block of four words. The Avalon Memory Mapped protocol does provide an address but I was free to ignore it. 

The F2SDRAM interface, on the other hand, could access any word of the entire 1GB DRAM. Since almost all of that is in use from Linux, I have to carefully address each transaction to point only at the top 1MB which I had reserved for my use. The logic in loopcart and in diskmodel will produce addresses only in that last megabyte range. 

C CODE THAT WRITES COMMANDS AND FETCHES STATUS WORDS

The basic approach to accessing the bridges from Linux is to point at the proper address in the range that is associated with the given bridge or GPV register. To do this, we have to get access to the memory range, which is done by opening a special file, /dev/mem, and then memory mapping it with a Linux system call. We start at the predefined addresses - xC0000000 for the H2F bridge where we will read/write the blocks of four words to the FPGA, and xFF200000 for the H2FLW bridge where we will write the command word and read the status word from the FPGA. The GPV registers for those bridges begin at xFF400000 and xFF500000.

The resulting addresses for the H2F data, the H2FLW data and the GPV control registers are pointers that we use to load and store from the C program. Assuming we have the two bridges configured and operational, then in order to send a command that begins a load operation using loopcart, we set the last four bits of a word to the code b0001 that represents the load command. 

That word is then stored through a pointer to location xFF200000 and automagically the bridge hardware issues a write on the H2FLW bridge. My logic in the FPGA then triggers loopcart, which begins to receive blocks of four words from H2F and writes them to the SDRAM. Reading through the pointer generates a read transaction where my logic sends the status word to the HPS side. 
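
Here is a minimal sketch of that command/status access, using the addresses given above. The 32-bit access width and the notion that only the low four bits of the command word matter are assumptions, and the GPV setup that enables the bridges is presumed to have been done already.

```c
/* Minimal sketch of the /dev/mem + mmap access described above; the command and
 * status word layout beyond the low four bits is assumed, and bridge activation
 * through the GPV registers is presumed to have already been performed.
 * Compile with -D_FILE_OFFSET_BITS=64 on 32-bit ARM so the physical offset fits in off_t. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define H2F_LW_BASE  0xFF200000u   /* start of the 2MB lightweight bridge window */
#define H2F_LW_SPAN  0x00200000u
#define CMD_LOAD     0x1u          /* b0001 = begin load of a cartridge image */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("/dev/mem"); return 1; }

    /* Map the lightweight bridge window into our process address space */
    void *lw = mmap(NULL, H2F_LW_SPAN, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, H2F_LW_BASE);
    if (lw == MAP_FAILED) { perror("mmap"); return 1; }

    volatile uint32_t *cmdstat = (volatile uint32_t *)lw;  /* any offset works, the FPGA ignores the address */

    *cmdstat = CMD_LOAD;            /* write transaction on H2FLW: command word into getcommands */
    uint32_t status = *cmdstat;     /* read transaction on H2FLW: status word back from the FPGA */
    printf("status after load command: 0x%08x\n", (unsigned)status);

    munmap(lw, H2F_LW_SPAN);
    close(fd);
    return 0;
}
```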

C CODE TO PERFORM A LOAD OPERATION

Once the command word is issued to the FPGA to begin a load operation, the FPGA side is waiting for writes from us on the H2F bridge. We will open a virtual 2315 cartridge image file, memory map it, and then transfer blocks of four words through another pointer (to xC0000000) which forces a write transaction. Our code must loop through the entire cartridge file, four words at a time, for a total of 130,326 times to load the entire cartridge file into the last megabyte of SDRAM. 
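
And a hedged sketch of that load loop, assuming the cartridge image file name, that each 64-bit store through the mapped pointer becomes one four-word write on the H2F bridge, and that the start-load command has already been issued over the H2FLW bridge as shown earlier.

```c
/* Hedged sketch of the load loop: map the virtual 2315 cartridge image and push
 * it to the FPGA through the H2F bridge four 16-bit words (64 bits) at a time.
 * The file name is hypothetical; compile with -D_FILE_OFFSET_BITS=64 on 32-bit
 * ARM so the physical offset fits in off_t. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define H2F_BASE   0xC0000000u     /* start of the 960MB H2F bridge window */
#define NUM_BLOCKS 130326u         /* four-word blocks in one cartridge image */

int main(void)
{
    int mem = open("/dev/mem", O_RDWR | O_SYNC);
    int img = open("cartridge.dsk", O_RDONLY);            /* hypothetical image file name */
    if (mem < 0 || img < 0) { perror("open"); return 1; }

    /* One page of the H2F window is enough, since my logic ignores the address bits */
    volatile uint64_t *h2f = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, mem, H2F_BASE);
    const uint64_t *cart = mmap(NULL, NUM_BLOCKS * 8, PROT_READ,
                                MAP_PRIVATE, img, 0);
    if (h2f == MAP_FAILED || cart == MAP_FAILED) { perror("mmap"); return 1; }

    /* The start-load command is assumed to have been written over H2FLW already */
    for (uint32_t i = 0; i < NUM_BLOCKS; i++)
        *h2f = cart[i];            /* each store becomes one four-word write on the bridge */

    munmap((void *)h2f, 4096);
    munmap((void *)cart, NUM_BLOCKS * 8);
    close(img);
    close(mem);
    return 0;
}
```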



Simulation testing of the new logic I constructed

ULTRA SIMPLE REDIRECTOR MODULE TESTED FIRST

My module getsdram is the interface to the F2SDRAM bridge that allows the FPGA, as the controlling party, to read or write SDRAM over on the Hard Processor System (HPS) side of the board. I have reserved the last 1MB of the 1GB of installed SDRAM, so that Linux does not touch it. The bridge makes use of that 1MB to hold the current virtual cartridge image since a 2315 disk cartridge is just under 1MB in capacity. 

When the IBM 1130 disk drive is spinning the dummy physical cartridge and goes into ready state, my application running on the HPS side will send a command to switch the Field Programmable Gate Array (FPGA) side over to active mode, setting on the drive_ram signal. This signal activates the disk modeling logic in the FPGA which will send streams of bits to the disk head electronics in the drive that correspond to what would have been read if the heads had lowered onto the physical cartridge.

By monitoring the sector mark signals from the disk drive my logic knows exactly where the head would be riding atop the circular track on the disk cartridge. By monitoring the disk arm movement interactions from the disk controller, my logic knows which cylinder is under the head. Thus my logic knows the cylinder, head, sector, word and bit within a word that will be beneath the head.

My diskmodel logic emits the proper stream of clock and data bit pulses into the disk drive head electronics as they would occur at this moment if we were reading a physical cartridge with the heads down and loaded. Because my modification stops the heads from loading onto the surface, the only source of pulses seen by the head electronics is the injection from my FPGA logic.

The diskmodel logic will call on getsdram to fetch each word of data for the current sector and feed that through the head electronics to the disk controller. The logic also generates all the preamble, sync word and check bits that the disk controller expects in order to successfully read from a sector. 

When the disk controller is writing to the disk, I capture the stream of clock and data pulses sent by the controller, assembling them into bits and words in that sector. I then update the SDRAM in the appropriate spot to save the word that was just written from the IBM 1130 CPU. 

At a later point, when the physical disk drive is switched off, the contents of the SDRAM upper 1MB may contain some data written from the IBM 1130. That must be transferred back to the source of the virtual 2315 cartridge image. When we start up, before we turn on the physical drive, an image of a disk cartridge must be selected and loaded into the end of the SDRAM.

These operations, unload and load, are the responsibility of my loopcart module. The virtual disk cartridge images are files on an SD Card that is attached to the HPS side of the chip. This facility provides capacity for hundreds of virtual 2315 disk cartridges on the SD Card and allows any one to be selected to be virtually loaded into the IBM 1130 disk drive.

Through interaction with the user interface, the operator first selects which virtual cartridge is to be loaded for use. My application in Linux on the HPS side will read that file and memory map it inside the running Linux image. It then makes use of the H2F bridge, as the controlling side, to write the cartridge to the FPGA side.

Loopcart interacts with the getdata module that receives the reads and writes of the H2F bridge. It will loop through the entire virtual 2315 cartridge image, four words at a time since that is the width of the H2F bridge, and ask getsdram to write them to the last MB of SDRAM. In order for loopcart to control the F2SDRAM bridge via getsdram, it must have control of that bridge. When drive_ram is off, loopcart has that control and can access the SDRAM. 

As the IBM 1130 disk drive goes ready, my application sees that and sends commands on the H2F LW bridge to switch over, first asking loopcart to load the SDRAM as described above and then turning on drive_ram. When the physical disk drive shuts off, it is observed by my Linux application on the HPS side, which turns off drive_ram and then unloads the SDRAM contents back to Linux.

The unload operation is the mirror of loading. Loopcart will read each block of four words from SDRAM and then wait until the H2F bridge via getdata does a read to grab that content. We hold the HPS side from completing its read by controlling a wait request signal on the H2F bridge, thus ensuring we have completed fetching the block of words before presenting them to the read command from the H2F bridge. 

This very simple redirector module routes the control signals to getsdram so that, depending on the state of drive_ram, either the loopcart or the diskmodel module drives getsdram. It is not even clocked, just simple multiplexing of the signals.
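
In VHDL terms it is nothing more than a handful of when/else assignments. A sketch with illustrative port names and widths, not the actual module:

    library ieee;
    use ieee.std_logic_1164.all;

    -- Minimal redirector sketch: when drive_ram is on, diskmodel drives
    -- getsdram; otherwise loopcart does.  Names and widths are illustrative.
    entity sdram_redirect_sketch is
      port (
        drive_ram  : in  std_logic;
        -- requests from diskmodel
        dm_address : in  std_logic_vector(25 downto 0);
        dm_write   : in  std_logic;
        dm_read    : in  std_logic;
        -- requests from loopcart
        lc_address : in  std_logic_vector(25 downto 0);
        lc_write   : in  std_logic;
        lc_read    : in  std_logic;
        -- selected request presented to getsdram
        sd_address : out std_logic_vector(25 downto 0);
        sd_write   : out std_logic;
        sd_read    : out std_logic
      );
    end entity;

    architecture rtl of sdram_redirect_sketch is
    begin
      sd_address <= dm_address when drive_ram = '1' else lc_address;
      sd_write   <= dm_write   when drive_ram = '1' else lc_write;
      sd_read    <= dm_read    when drive_ram = '1' else lc_read;
    end architecture;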

Testing under simulation involves having various processes asynchronously toggle control signals, addresses and data while the master process holds drive_ram off for a while and then turns it on. This was very simple logic for a testbench and I could immediately verify that the proper address, data, control and response signals were occurring based on the state of drive_ram.

This tested out perfectly and I then moved on to the barely more complicated module below.

COMMAND AND STATUS HANDLING MODULE TESTED NEXT

The existing module getcommands will continually latch in and register the value of any word written to us over the H2F LW bridge from the HPS side. This will be a four bit command that selects actions such as switching the drive_ram signal, starting the load of a virtual cartridge, or triggering the unload of a cartridge, plus a null action value. In addition, whenever the master side on the HPS does a read from any address of this bridge, it is sent a status value that it uses to verify successful recognition of a command, successful completion of the operation, detection of an error, or an idle good status. 

My bit of new logic simply monitors the registered command word and advances through states to implement the commands. Part of its operation is emitting the proper status value to be interrogated by the HPS. This shows the Linux program that its command is underway, in the case of a load or unload, and then reports successful completion or an error condition after loopcart is through.
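
The general shape of that state machine is sketched below. The command codes, status codes and handshake (returning to idle once the HPS writes the null command) are placeholders chosen for illustration, not the real assignments in getcommands or my control logic.

    library ieee;
    use ieee.std_logic_1164.all;

    entity cmd_status_sketch is
      port (
        clk            : in  std_logic;
        command        : in  std_logic_vector(3 downto 0);  -- latched by getcommands
        loopcart_done  : in  std_logic;
        loopcart_error : in  std_logic;
        drive_ram      : out std_logic := '0';
        start_load     : out std_logic := '0';
        start_unload   : out std_logic := '0';
        status         : out std_logic_vector(3 downto 0) := "0000"
      );
    end entity;

    architecture rtl of cmd_status_sketch is
      type state_t is (idle, busy, finished);
      signal state : state_t := idle;
    begin
      process(clk)
      begin
        if rising_edge(clk) then
          start_load   <= '0';
          start_unload <= '0';
          case state is
            when idle =>
              status <= "0000";                      -- idle, all good
              case command is
                when "0001" => drive_ram <= '1';     -- give SDRAM to diskmodel
                when "0010" => drive_ram <= '0';     -- give SDRAM to loopcart
                when "0011" => start_load   <= '1';  -- begin cartridge load
                               state        <= busy;
                when "0100" => start_unload <= '1';  -- begin cartridge unload
                               state        <= busy;
                when others => null;                 -- null command, nothing to do
              end case;
            when busy =>
              status <= "0001";                      -- command underway
              if loopcart_error = '1' then
                status <= "1111";                    -- error detected
                state  <= finished;
              elsif loopcart_done = '1' then
                status <= "0010";                    -- completed successfully
                state  <= finished;
              end if;
            when finished =>
              -- hold the completion or error status until the HPS writes
              -- the null command, then return to idle
              if command = "0000" then
                state <= idle;
              end if;
          end case;
        end if;
      end process;
    end architecture;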

My testbench simulated various write and read transactions as they would have been driven from the HPS side over the H2F LW bridge, using getcommands, and ensured that I properly emitted control signals based on the various commands. The status returned appeared correct as well. 

LASHING IT TOGETHER FOR FIRST HARDWARE TESTING

I will pull these simulated and validated modules into the top level reference design file, which is in Verilog rather than my preferred VHDL, so that I can fire up the board and test the operation of the commands on real hardware. 

Sunday, March 5, 2023

Building up the next level of logic in order to begin tests on the physical hardware

SCAFFOLDING TO INCLUDE LOOPCART AND THE BRIDGE MODULES

The getdata, getsdram, getcommands and loopcart modules will be instantiated in the high level module that contains the connections to the actual System on a Chip (SoC) hardware. This ties them to the signals coming from and going to the various bridges. Now that this block of logic appears correct from simulation, it must be connected to the SoC for proof on real hardware. 

After instantiation, when loopcart is activated it will drive the bridges to accomplish the movement of the nearly 1MB of virtual 2315 cartridge information between the running Linux image in the Hard Processor System (HPS) side and the SDRAM whose last megabyte was reserved for access from the Field Programmable Gate Array (FPGA) side. 

ADD LOGIC TO RECEIVE COMMANDS AND SEND STATUS TO LINUX

The HPS to FPGA Lightweight Bridge (H2F LW) is controlled by the Linux side and communicates with logic in the FPGA. I will accept a command word written from Linux, latch it in, and make use of it to control my logic. Most importantly, it delivers words that trigger loopcart to run in load or unload modes to transfer the virtual cartridge data. 

I also provide a status word that is delivered when the H2F LW bridge does a read, giving Linux the current status of any commands it has issued to the FPGA side. The command word will also drive the switching of control for SDRAM access between two major functions, loopcart and diskmodel. 

When the disk drive is running, our diskmodel module will read and write words as they are streaming to/from the disk drive electronics. When the Linux user is selecting the appropriate virtual cartridge to load into a drive which is currently switched off, loopcart will be controlling SDRAM. 

ADD REDIRECTOR FOR SDRAM ACCESS BETWEEN DISKMODEL AND LOOPCART

The signals that drive the getsdram module must have only one source to avoid conflict. Thus, a redirector selects whether these signals are driven by loopcart or by diskmodel, based on the commands we receive over the H2F LW bridge. This is a very simple bit of logic which should not change as we move forward to the production version of this virtual disk cartridge facility.

START ON LINUX APPLICATION TO RUN IN THE HPS

I will need to write some user code running under Linux to work with the FPGA side over the bridges. First, it must activate the three bridges H2F LW, FPGA to SDRAM (F2SDRAM) and HPS to FPGA (H2F) so that they can be used. These are memory mapped interfaces, thus I need only store or access particular locations in the Linux memory space to use the bridges.

Code is needed to send the commands over H2F LW to switch SDRAM access, to drive a load or unload, and to check status. It must handle the HPS side of a cartridge load or unload. This involves looping through a memory mapped disk file, the virtual 2315 cartridge image, and transferring blocks of four words iteratively over the H2F bridge. Each such operation involves 130,326 blocks of words to transfer the nearly 1MB cartridge image. 

Eventually this code will be integrated into a full user interface which permits the user to select virtual cartridge files that are sitting on the SD Card. As the disk drive is switched on and off, this code will drive the load and unload to put the cartridge image in SDRAM, and switch SDRAM access based on the state of the physical disk drive. 

Initially, I will set up a dummy file and have skeleton code that simply fires off a load, loops to send the dummy file, then fires off an unload and compares what comes back to the dummy file. This will prove out the transfer mechanism without having to code and debug the rest of the user interface.

Saturday, March 4, 2023

Simulation of the load/unload logic completed

TESTBENCH MECHANISMS TO PROVE OUT THE ENTIRE LOAD/UNLOAD PROCESS

Having written the loopcart module, I put it into a testbench to simulate the operation of entire load and unload processes. Loopcart will start with the low address of the last megabyte of SDRAM and step through the space as we read or write blocks of four 1130 words per cycle, 130,326 times. 

For load, it will wait for the Linux application in the Hard Processor System (HPS) side to write the first block of four words over the H2F bridge, then write that block to the SDRAM over the F2SDRAM bridge. We are on the FPGA side and thus are the controlled party on the H2F bridge, with the timing of the write or read over that bridge decided entirely by the HPS with no voice from the FPGA. The F2SDRAM bridge is controlled by us as the master side, so we determine the timing for those read or write operations.

Loopcart will make use of the waitrequest signal of the H2F bridge, a control that the subservient side (us) can assert to tell the master side to hold off on its write or read. We keep that signal asserted, opening a window only when we want the read or write from the HPS to actually take place. 
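
For a single unload beat, the handshake reduces to a small amount of combinational logic. The sketch below uses Avalon-MM-style names and a 64-bit data path as assumptions; the real ports in my design differ.

    library ieee;
    use ieee.std_logic_1164.all;

    -- Sketch of the H2F waitrequest gating during unload: the HPS is the
    -- master, so we stall its read with waitrequest until the block it
    -- wants has been fetched from SDRAM.  Names are illustrative.
    entity h2f_unload_gate_sketch is
      port (
        h2f_read        : in  std_logic;                      -- read strobe from the HPS master
        block_valid     : in  std_logic;                      -- loopcart has the next four-word block ready
        block_data      : in  std_logic_vector(63 downto 0);  -- the block fetched from SDRAM
        h2f_readdata    : out std_logic_vector(63 downto 0);
        h2f_waitrequest : out std_logic;
        block_taken     : out std_logic                       -- tells loopcart to fetch the next block
      );
    end entity;

    architecture rtl of h2f_unload_gate_sketch is
    begin
      -- Stall the HPS read while the block is not yet ready; complete it
      -- as soon as loopcart signals the data is available.
      h2f_waitrequest <= '1' when (h2f_read = '1' and block_valid = '0') else '0';
      h2f_readdata    <= block_data;
      block_taken     <= h2f_read and block_valid;
    end architecture;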

My testbench has to model all the signals of the H2F and F2SDRAM bridges which come from the HPS side, in addition to kicking off the major operation with a trigger to loopcart to perform the unload or load. I wrote the testbench signal timing to test out all the relative timing cases:

  • H2F attempts write or read before loopcart is ready
  • H2F starts the read or write exactly as loopcart is ready
  • loopcart has to wait for a delayed write or read from H2F
  • SDRAM read and write take place in one cycle
  • SDRAM read and write are delayed
  • SDRAM write and read are at the exact time loopcart is at the next state

RESULTS OF THE SIMULATION

I uncovered a few aspects of the logic that needed to be made more tolerant of timing variations, leading to a potentially more reliable design. I spent about as much time fixing or refining the testbench as I did working on the logic of the design. 

Eventually, when I had separated the testbench driver for the F2SDRAM link from the driver for the H2F link, allowing them to be truly asynchronous to each other, I achieved the timing relationships needed to shake out the logic. 

I was able to see it cycle through the entire 1MB of SDRAM, although I only need 99.43% of that to hold the entire cartridge. Realizing that, I adjusted the loop end so that I stopped after 130,326 iterations and not the 131,072 that would fit into one megabyte. 
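
The arithmetic behind those two numbers, captured as VHDL constants (the 203 x 2 x 4 x 321 geometry is my reading of the 2315 format; 521,304 words at two bytes each is 1,042,608 bytes, the 99.43% of a megabyte mentioned above):

    package cart_size_sketch is
      -- 2315 geometry assumed: 203 cylinders x 2 heads x 4 sectors x 321 words
      constant WORDS_PER_CART  : natural := 203 * 2 * 4 * 321;   -- 521,304 sixteen-bit words
      constant BLOCKS_PER_CART : natural := WORDS_PER_CART / 4;  -- 130,326 four-word transfers
      constant BLOCKS_PER_MB   : natural := (1024 * 1024) / 8;   -- 131,072 eight-byte blocks fit in 1MB
    end package;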

Wednesday, March 1, 2023

Simulating getdata and getcommands modules complete

GETDATA AND GETCOMMANDS SIMULATION

I completed simulation and verification of these two modules, using a testbench that toggled signals as the H2F and H2F LW bridges are expected to operate. The signals looked clean and my error state detection operated as I expected. 

MODULE DEVELOPMENT FOR LOGIC TO DRIVE THE LOAD AND UNLOAD PROCESS

This next higher level module will be called when the getcommands bridge receives a command from the Linux side requesting a load or unload. It will then loop to move an entire virtual 2315 cartridge image between the Linux side and the dedicated 1MB of SDRAM we control from our FPGA side. 

For load, it will wait for the Linux side to send each group of four words from the file selected by the user, then write that group out to the SDRAM in its assigned spot. The Unload operation will fetch each group of four words from SDRAM and then wait for the Linux side to read that group in order to update the file image on the SD Card. 

As I polish up this module's logic, I simulate it with a testbench to watch it appropriately drive the various bridge communications between the FPGA and HPS sides of the chip. Rather than just simulating the requests to the various modules I have already tested, I chose to link them together to ensure this works properly all the way down to the Avalon MM interfaces of each bridge.
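
For reference, the style of reusable testbench procedure that drives one Avalon MM write the way a bridge master would looks roughly like this; it is a generic sketch with unconstrained ports, not my actual testbench code.

    library ieee;
    use ieee.std_logic_1164.all;

    package tb_avalon_sketch is
      -- Drives one Avalon MM write, holding the strobe until waitrequest
      -- drops.  A sketch only; real testbench procedures differ in detail.
      procedure avalon_write(
        constant addr        : in  std_logic_vector;
        constant data        : in  std_logic_vector;
        signal   clk         : in  std_logic;
        signal   address     : out std_logic_vector;
        signal   writedata   : out std_logic_vector;
        signal   write_sig   : out std_logic;
        signal   waitrequest : in  std_logic);
    end package;

    package body tb_avalon_sketch is
      procedure avalon_write(
        constant addr        : in  std_logic_vector;
        constant data        : in  std_logic_vector;
        signal   clk         : in  std_logic;
        signal   address     : out std_logic_vector;
        signal   writedata   : out std_logic_vector;
        signal   write_sig   : out std_logic;
        signal   waitrequest : in  std_logic) is
      begin
        wait until rising_edge(clk);
        address   <= addr;
        writedata <= data;
        write_sig <= '1';
        loop
          wait until rising_edge(clk);   -- hold the beat until the slave accepts it
          exit when waitrequest = '0';
        end loop;
        write_sig <= '0';
      end procedure;
    end package body;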