Monday, February 1, 2016

Virtual 2501 card reader functionality tested and working well, building out 1442 implementation

SAC INTERFACE FOR ADDING PERIPHERALS TO THE 1130

Testing virtual 2501 card reader

I fired up the system and tested the 2501 card reader function by toggling in some hand code: an XIO Init Read to do the reading, plus an interrupt handler that grabs the DSW with a reset, pauses, then returns to the mainline. The mainline is the XIO read followed by a pause and a loop back, letting me examine the buffer contents after each read.

I used a few card decks from the 1130 simulator as the data to read and check. When I first fired up my hand program, I saw it issue the XIO, but there was no sign that either my FPGA or the Python program was processing it. I looked over the FPGA first to be sure it was capturing the XIO IR, then the Python program, and also stuck in some diagnostic LEDs and messages to help. I then discovered that I had forgotten to click the checkbox that activates the virtual 2501 support. I went back to testing and found a couple of minor flaws in the Python code, introduced when I collapsed several transactions (last XIO function, WCA address) into one.

I discovered that my Unicode-oriented read of 029 format files (IBM 1130 simulator ASCII files of card images) fails on Ubuntu Linux but works on Windows. I will need to work on this, but initially I will just use native 1130 simulator (binary) format files.

My testing worked great for the file I entered, until I ran into the end of the deck, when the card reader should go 'not ready' unless the 'last card' checkbox is on. My primitive hand code does not check for not ready, but just issues another XIO Init Read even though the file is done. My Python code doesn't watch for this and attempts to read the next image from the closed file, causing an error. I can protect against this in the Python code.

A real reader acts the same way when the hopper empties. It signals not ready; then, if the operator pushes the start button with no new cards in the hopper, it goes ready in last card mode. The next XIO Init Read will read the final card, and the operation complete interrupt will have the 'last card' bit set in the DSW to tell the software that this is end of file rather than a temporary case where more cards have to be loaded into the hopper.
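
To get the Python side right, I plan to model that same behavior explicitly. Here is a minimal sketch of the idea; the class, method and DSW bit names are illustrative placeholders, not my actual program:

# Illustrative sketch of end-of-deck handling for the virtual 2501.
# Class, method and DSW bit assignments are placeholders, not the real code.

DSW_OP_COMPLETE = 0x8000   # placeholder bit assignments for illustration
DSW_LAST_CARD   = 0x4000
DSW_NOT_READY   = 0x0001

class Virtual2501Deck:
    def __init__(self, images, last_card_checked=False):
        self.images = images                  # list of 80-column card images
        self.next_index = 0
        self.last_card_checked = last_card_checked

    def read_next(self):
        """Return (card_image, dsw), or (None, dsw) if the reader is not ready."""
        remaining = len(self.images) - self.next_index
        if remaining == 0 or (remaining == 1 and not self.last_card_checked):
            # Hopper empty, or only the final card remains and the operator has
            # not pushed start in last card mode: signal not ready, read nothing.
            return None, DSW_NOT_READY
        image = self.images[self.next_index]
        self.next_index += 1
        dsw = DSW_OP_COMPLETE
        if self.next_index == len(self.images):
            dsw |= DSW_LAST_CARD              # final card: this is end of file
        return image, dsw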

I will fix up the Python code, then step through the card images with my hand code on the 1130, verifying that it behaves properly for both the last card sequence and the 'out of cards' not ready case. Once this is fixed, the only remaining issue is converting 029 format Unicode files to the native binary format as I open such files.

I still had a flaw in how I deal with hand code that incorrectly issues an XIO Init Read to the virtual 2501 when it is not ready. The solution is in the FPGA, where I won't record the last XIO function code if either Not Ready or Busy is on in the DSW. The Python code won't see any XIO in that case until it loads a new file and sets the DSW back to ready (turning off the Not Ready bit).
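
The gating itself is simple. Here is a behavioral model in Python of what the FPGA logic does; the real logic is in the FPGA itself and the bit positions here are placeholders:

def latch_xio_function(dsw, new_function_code, latched_code):
    """Behavioral model of the FPGA gating: record the last XIO function
    code only when the device is neither Not Ready nor Busy in the DSW."""
    NOT_READY = 0x0001    # placeholder bit positions for illustration
    BUSY      = 0x0002
    if dsw & (NOT_READY | BUSY):
        return latched_code         # ignore the XIO; the PC never sees it
    return new_function_code        # otherwise latch it for the PC to poll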

The code I wrote to handle the Unicode ASCII (needed for two special characters on card images that are not part of the ASCII character set) used the Windows generic encoding 'mbcs', but Unix and Linux have no such default. I changed the code to explicitly request 'UTF-8'. Testing showed this worked perfectly.
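
The fix boils down to passing an explicit encoding when opening an 029 format file rather than relying on the platform default. Roughly, with an illustrative helper and file name:

# Open an 029 format card image file the same way on Windows and Linux;
# 'mbcs' exists only on Windows, so request UTF-8 explicitly.
def read_029_images(path):
    with open(path, 'r', encoding='utf-8') as f:
        return [line.rstrip('\n') for line in f]

card_images = read_029_images('deck.029')   # illustrative file name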

I ran multiple decks through the virtual card reader, including a mix of binary and Unicode ASCII, with everything working as expected. The last card image of each file is left 'in the reader' with the reader not ready, unless the 'last card' checkbox was selected, which allows the last card to also be read. The DSW showed last card or not ready exactly as would occur with a real 2501.

At a later time, I will test the boot operation on the virtual 2501. This will read one card image in the special program load mode, stuff the 80 columns into memory starting at location 0, then trigger my pseudo-program-load sequence.
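
A behavioral sketch of what that boot test amounts to, with stand-in helper names rather than my actual SAC interface calls:

def pseudo_program_load(fpga, boot_image):
    """Sketch of the planned boot test: one card image read in program load
    mode lands in core locations 0 through 79, one column per word, then the
    pseudo-program-load sequence is triggered. Helper names are stand-ins."""
    assert len(boot_image) == 80
    for address, column in enumerate(boot_image):
        fpga.store_core(address, column)     # columns go to locations 0..79
    fpga.trigger_pseudo_program_load()       # then kick off the load sequence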

Building virtual 1442 functionality

I jumped into modifying and building the logic for the 1442 reader/punch, since a few aspects of that design caught my attention. The PC side is pretty clean, while the FPGA side is a bit more complex, mainly because of the looping FSMs that emit column-by-column interrupts and the timing modeling for feeding and other operations.

The PC will watch for XIO instructions and respond to XIO Control and XIO Init Read (a fake operation that a programmer should never execute for a 1442). Any XIO Control operation will cause a feed cycle, which involves pushing down one card image from the PC file. I decided that users must open a file of 'blank cards' if they are punching; otherwise the file contains data cards to be read.
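
In sketch form, the PC side of the feed cycle looks something like this; the deck and FPGA helpers are stand-ins for my actual code:

def handle_1442_feed_cycle(deck_file, fpga):
    """Push one card image (80 Hollerith columns) down to the FPGA in
    response to an XIO Control seen on the virtual 1442 (sketch only)."""
    image = deck_file.read_card()       # next data card, or a 'blank card'
    if image is None:
        fpga.set_1442_not_ready()       # file exhausted: presumably the reader
        return                          # would go not ready, as with the 2501
    fpga.load_read_buffer(image)        # 80 columns, 12 bits per column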

The FPGA will see the card pushed, start emitting column interrupts at the proper pace, handle XIO Read or XIO Write within the FPGA by feeding or updating the two card buffers, and claim to have seen an XIO Init Read when the last column of read or punch is processed and the elapsed time matches the performance of a 1442 model 7.

The FPGA will hold off on the operation complete interrupt (IL4) until it sees that the PC has fetched the last of the punch buffer, then it triggers the interrupt and clears it after the XIO Sense DSW with Reset bit is processed.

The PC program sees the XIO Init Read code and fetches the punch buffer contents. If an output file is open on the PC, it writes that image to the end of the file. In either case it goes back to polling for the next XIO Control operation.
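
The corresponding polling loop on the PC is just as simple in outline, building on the feed-cycle sketch above; again, the helper names are illustrative:

def poll_virtual_1442(fpga, deck_file, punch_file=None):
    """Illustrative PC polling loop for the virtual 1442."""
    while True:
        xio = fpga.get_last_1442_xio()          # stand-in for the SAC transaction
        if xio == 'CONTROL':
            handle_1442_feed_cycle(deck_file, fpga)
        elif xio == 'INIT_READ':                # the fake code: punch cycle done
            image = fpga.fetch_punch_buffer()   # 80 columns back from the FPGA
            if punch_file is not None:
                punch_file.write_card(image)    # append the image to the output file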

8 comments:

  1. Carl, I'd like to see EBCDIC card reading as well as ASCII. This should be simpler, and the 1130 library already provides the table. This might be useful, e.g. for using an emulated 1130 as an RJE system to an emulated 360.

  2. The 1130 doesn't know anything about ASCII. The cards are in Hollerith and that is how card data is handled in the 1130. IBM established Hollerith codes for EBCDIC and they are automatically supported by the 1130 for reading and punching cards.

    The 029 keypunch only directly supports a small subset of EBCDIC as keycaps and printed glyphs. The 029 keypunch and 1130 were launched at essentially the same time as S/360 and EBCDIC. The prior generation for systems like 1401 and 70x0 was BCD and the 026 keypunch represented a subset (a large subset given the more limited range of characters in BCD) on its keycaps and printed glyphs.

    The various printers have their own set of glyphs, depending on the print chains or character wheels or type balls used, but these are typically offering 48 to perhaps 100 characters depending on the device.

    The 1130 has tables and conversion routines to map certain Hollerith card characters to certain 1132, 1403 or 1053 characters, but Hollerith always offers a wider set of characters so there are choices to be made in what Hollerith is converted to what printer code, etc.

    While IBM stuck an 'ASCII' bit in the PSW of S/360, it was almost never used and those systems ran with EBCDIC. The 1130 table you mentioned must be a way to convert a pair of Hollerith characters into one word that has two bytes of EBCDIC. It would be a lossy conversion, but as a practical matter EBCDIC encodes 256 character values, which is more than almost anyone would use.

    Since Hollerith is a combination of 12 bits (rows 12, 11, and digits 0 to 9) but the 1130 has 16 bit words, cards read in Hollerith are technically encoded as a 1 to 1 mapping of the card rows 12-11-0...9 to bits 0...11, with four constant bits of 0 at the end of the word.

    Binary cards are various encodings of binary bits to card rows, but are not Hollerith characters which is why one does not turn on the printer on a keypunch when duplicating such a card - there is no glyph to assign except accidentally.

    The most common binary card encoding uses four sequential columns of a card to hold three sequential binary words. Bits 0-11 of word 1 go in column 1, column 2 has bits 12-15 of word 1 and bits 0 to 7 of word 2, and so forth.

    The IBM 1130 simulator adopted a file convention that closely matches the in-memory storage of Hollerith - 12 bits mapped to the 12 card rows plus four zero bits. This is called 'binary' mode but really it is the native way the 1130 reads and punches cards, regardless of the encoding being used.

    If you read source statements for a FORTRAN program, each card column is an EBCDIC character encoded in Hollerith, the keypunch would print the glyph associated with the character at the top of the card column, and it would sit in the 1130 memory word as Hollerith. The simulator 'binary' file would be the same Hollerith as you would see in the 1130 memory word.

    If you were to read an 1130 card encoded binary card, such as the 4:3 format I mentioned before, then some conversion routine would act on the Hollerith to convert four words of Hollerith to three binary words of 16 bits in the computer, but it is first stored as 80 columns of Hollerith. Thus, 'binary' files in the simulator are actually loss-free native Hollerith mode.

    As a convenience for PC users to read and create files to use with the 1130, Brian Knittel created ASCII file formats which he would convert into Hollerith as the user read or wrote to the files. ASCII has a smaller number of characters than EBCDIC and, most seriously, it is not the same set of characters - some ASCII characters don't exist in EBCDIC and, more often, some common characters used on EBCDIC computers (the logical not character as an example) have no counterpart in ASCII.

  3. Brian tried to map the keycaps and printed glyphs of an 029 keypunch to ASCII. Thus, when a user punched a keycap for *, it was punched as 11-4-8 on a card. This is 0100001000100000 in an 1130 word (the 11, 4 and 8 rows are 1, the rest are 0). It is the same in a simulator 'binary' card. If it were converted to EBCDIC to pack two card columns per 1130 word, it would be 01011100 (x5C). Brian converted it to ASCII, which has the character * as 0101010 (x2A). That ASCII code x2A is associated with the glyph *, so reading that with Wordpad or other editors on the PC you would see *.

    Consider the logical NOT character (the glyph is a horizontal line with a downward right angle extension on the right side). It is encoded in Hollerith as 11-6-8, in 1130 and simulator binary as 0100000010100000, and in EBCDIC as x5F. There is no such glyph in ASCII. Brian chose to store the 029 format files in Unicode format (more than one byte needed for some characters), not in ASCII, but Unicode has ASCII as a subset encoded in the first 128 values. Thus the glyph for logical not is a Unicode representation that shows up if you use a PC with Unicode encoding support, something like UTF-8.

    What about the cent sign? No ASCII character exists for that, but it is an 029 keycap, a printable glyph and used often with EBCDIC systems.

    A mapping between Hollerith and ASCII is very lossy, not standardized, and can be imprecise. I can use the multipunch key on an 029 keypunch to punch 12-0-1, which is the lower case letter 'a' in EBCDIC and in Hollerith, and it would be stored as 1011000000000000 in the 1130 and simulator binary. It is an x81 in EBCDIC and x61 in ASCII, but does not exist as an 029 keycap or glyph. When I duplicate that lower case 'a', it does not print 'a' at the top of the column because the 029 does not have such a glyph; it is an unprintable character.

    The 1403 and 1132 don't have a lower case glyph so it would print as a blank, but the 1053 does support both upper and lower case. With the right typeball, it would type the glyph 'a' but ironically, the typeball for the 1130 has the same glyph 'A' that prints for either 'a' or 'A' Hollerith code since they just duplicate the upper case character.

    Brian's 029 translation does not do anything with the lower case 'a' I punched as a 12-0-1, but it would be properly read and processed on the 1130 if read in 'binary' (1130 native) mode. It would become a space or blank using the 029 format ASCII.

    As a convenience, my Python program for the virtual card reader/punch devices accepts the information-losing 029 format files but converts them to the internal 1130 Hollerith format as best it can before actually passing it down to the FPGA and then the 1130. I never read ASCII into the machine.

    Sorry for the very long-winded explanation, but this is a subtle and complex area - encodings, meanings, characters, and glyphs are all different things.

  4. Ah, takes me back! Back to my Green Card days. Here is a high-res PDF of a Green Card (http://archive.computerhistory.org/resources/access/text/2010/05/102678081-05-01-acc.pdf). Scroll down a bit, there are the 256 EBCDIC values with their punch codes and glyphs.

    12 bits/punches gives 4096 possible values, and if you mean to dump or load a core image, any of them could appear. But the "Punched Card Code" column of the green card has at most 6 punches per character. Is that the punch code you are calling "Hollerith"?

    As you note, EBCDIC has some characters not in ASCII (¬, ¢) and BCDIC even more (like ¤‡∆√⧻ and what the heck is that telephone pole at BCDIC 4F?). But all are in Unicode (except BCDIC 4F, or I can't find it). So working in Unicode just sweeps away all the compromises and gimmicks and hacks people have done, trying to fold EBCDIC into ASCII without losing information.

    As a practical matter I hope you are working in Python 3, where all strings are Unicode by default, and there is a well-defined way to convert the string type to the binary-array type and back again. It's a mess trying to do that in Python 2.7.

  5. Isn't the telephone pole a group mark, or word mark? I spent a lot of time mapping character sets in a past life. Actually I think there is an ASCII character for NOT, but it's in the upper 128 characters, not 7-bit ASCII. No cent sign except Unicode, IIRC.

  6. The last remaining BCD glyph that had been missing from Unicode was adopted just in the last year, after one of our restoration team wrote a proposal to the standards body and had it accepted. Unicode now supports word mark, group mark, tape mark, and all the other pre-360 characters used by IBM.

  7. Great! I was looking for some BCD characters a while ago. Does that include plus-zero and minus-zero?

  8. The IBM 1401 glyphs for plus zero and minus zero are ? and ! (yes, question mark and exclamation point). On the 1407 console typeballs, they used the glyphs & and - instead.
