At VCFMW last month, my table was adjacent to Lorraine and her friends.
Ben Heck walked by during setup, and asked me what it was. I was clueless, so we started making educated guesses. The Amiga poster was a start.
I do wire-wrap. This thing is a marvel to behold. It is quite orderly, but could have used colors more effectively.
The three units implement the VLSI chips and the main board of the Amiga that was first shown at CES (I believe).
Each VLSI is a stack of PCBs such as you might get from Vector, with columns of pads for ICs in wire-wrap sockets, bus bars, and edge areas with mounting holes for connectors. The layers are connected by ribbon cables.
(they are not called breadboards!)
Wire wrap is a superior technology. There are no cold solder joints. The connections are gas-tight.
It is not hard to debug, if you follow some rules and don't make a spaghetti bird's nest.
Such workmanship can be seen on minicomputers of the early 1970s.
Whole computers were made by wire-wrap around MSI chips. My wire-wrapped PDP-11/10 functioned perfectly thru the 1990s.
Recently, I implemented a microcomputer design in wire-wrap. That was enjoyable!
My design was captured in KiCad and laid out as a PCB, which I translated to perf-board and wire-wrap sockets.
This approach is perfect for prototyping, as you can simply add new blocks.
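If it helps to picture that KiCad-to-wire-wrap step, here's a minimal sketch of turning a netlist into a point-to-point wiring list; the netlist below is a hypothetical hand-written example (not KiCad's actual export format), and each net is simply daisy-chained:

    # Hypothetical netlist: net name -> pins (reference designator, pin number) on that net
    netlist = {
        "CLK":   [("U1", 39), ("U2", 6), ("U3", 6)],
        "D0":    [("U1", 5),  ("U2", 14), ("RN1", 2)],
        "RESET": [("U1", 40), ("U2", 26)],
    }

    def wiring_list(netlist):
        """Turn each net into a daisy chain: a net with N pins needs N-1 wraps."""
        wires = []
        for net, pins in sorted(netlist.items()):
            for (ref_a, pin_a), (ref_b, pin_b) in zip(pins, pins[1:]):
                wires.append((net, f"{ref_a}.{pin_a}", f"{ref_b}.{pin_b}"))
        return wires

    for net, src, dst in wiring_list(netlist):
        print(f"{net:6s} {src:8s} -> {dst}")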
I probably still have my wire-wrap gun in storage somewhere, but I would much rather use an FPGA. If you don't mind answering: why do you still use this tech instead of FPGAs?
As for cold solder joints: no, they don't do that. But when you start making modifications you have to be extremely careful not to cause any damage because tracing a loose joint on a wire wrap board is the stuff of nightmares.
The display was in a booth replicating the 1984 Winter CES booth. Here is a Creative Computing article from the 1984 event.
https://www.atarimagazines.com/creative/v10n4/150_Amiga_Lorr...
Forebodingly, the article signs off with "Amiga, please don't join the sorrowful ranks that have wasted technological superiority through marketing muck-ups."
Well, lucky Amiga, it was wasted by executive muck-ups rather than mere marketing foibles.
The marketing managed to miss most of the time, too. (Mostly because of exec-level misguidance, though.)
What a godawful mess that must have been to debug. I've never used wirewrap; it looks awful to me.
I am trying to imagine what it would have been like to design such a system using only pencil and paper. Going from block diagram to the lowest level, just on big sheets of paper... the pencil sharpeners must have been emptied twice a day.
I made wirewrap boards back in the early 90s and it was extremely tedious, both following the netlist instructions to do the initial work, and then the inevitable debugging when it didn't work. Of course you can have two kinds of bugs - either you made a mistake doing the wirewrap, or your design has a problem! For the complexity of the boards I did, I think it took 2-3 days to do the initial wirewrap, and then days or weeks to debug and get it working.
It was, however, both cheaper and easier than doing a prototype PCB. For that, I'd have to use the institution's darkroom with their flatbed photoplotter connected to a PDP-something that you had to boot from reel-to-reel tape. The plot happened overnight, and then had to be developed next day in the darkroom, and then if you were lucky you'd have transparencies of each layer of the PCB that you could send off to a local company who would etch you a single PCB for a lot of money in a few weeks. Even that wasn't trouble-free, since PCBs can have manufacturing faults, or you could screw up when soldering the components to the board, or your design could be wrong.
It was very rare that I as the most junior person was allowed to go the PCB route. I think for my boards it happened only once on an ECL design that simply wouldn't have been possible with wirewrap. Although I was tasked with doing the transparencies for other team members. Since I was being paid only £40/week through a government benefits scheme, it was much cheaper to pay for my time than to pay an external company.
Also as the other reply says, I used CPLDs a lot which were much faster to iterate. With practice you could pull out the QFP package, put it in the programmer, recompile and upload the new logic, and put it back into the board in an hour. Luxury!
We never used pencil and paper (except for notes). The software for drawing schematics, laying out PCBs, making netlists, and compiling CPLDs was pretty advanced even then. Although all of it was horribly proprietary. No KiCad for you.
Taught you to check everything in your design early and often.
The existence of wire wrap tells you a lot about how painfully tedious it was to lay out PCBs at the time. I did a couple of wire wrap boards, but eventually just started soldering wire-wrap wire to the sockets. By the early 90s it was faster to lay out a PCB and have it fabbed. Bonus: you could outsource that and use the 3-4 weeks to do less tedious things.
This is just around the time that programmable logic became readily available. It'd be much easier to iterate with that than wiring up logic gates. For the last 30 years you've been able to do all this debugging with simulations and then test using FPGAs.
Present day: I can fabricate a wire-wrap version from my PCB footprints, much faster than I can route the PCB!
With wire-wrap, you can route multiple wires between the same pins, and I like to neatly bundle a whole bus's worth.
I'm far more pleased with the results of my wire-wrap than with the quality of my SMT soldering once I get a PCB made.
I had a tutor for wire-wrap in the 1980s, but I'm self-taught in PCB routing, and I start each layout over at least 3 times.
I'm old-young enough to be aware of the evolution of minicomputers implemented in MSI TTL with wire-wrap (1970) to VLSI integration (1975). Examples are the LSI-11 and TMS9900.
My first home-brew micro was done in 1987 using the Radio Shack hand-tool and an OK Industries' motorized wrap gun.
Now I add CPLDs to my wire-wrap designs! Just like on an iterated PCB, you must lock the physical pins to functions.
I've never seen wire-wrapped boards besides photos of this and maybe some other early micro. So of course I had to do a little search, and one of the first results has Bil Herd from Commodore (Plus/4, C128...) explaining it.
https://www.youtube.com/watch?v=IXvEDM-m9CE
I love this, it’s like a holy relic. :D
I’m amazed someone preserved that! In whose ownership is it currently?
I remember seeing old photos of the prototype. I assumed it was lost decades ago.
Dale Luck (from the original team) is preserving it, apparently.
When I was a kid I had a Dragon32 and my little brother had an Amiga 500. I thought it was so cool with the demos and the sound but he was always getting worms that spread via floppy disc.
Yup, heh. And you'd infect all your floppies if you just warm booted, and put in the next game.
Flipping the read only tab on every floppy was the first thing I did, after my first infection.
There was a distinct floppy sound when the filesystem was updated after a write.
I noticed my first virus pretty quickly, and even though I couldn't remove it, I could disable it in some files that I couldn't reproduce (no internet back then, and I was on the wrong side of the Iron Curtain as a child).
The failure of the Amiga and the near-failure and resurrection of Apple is what makes me believe in parallel universes/alternate timelines more than anything :)
I was there and it was glorious to watch. Beautiful to see an interesting part of history up close.
If they could build that then, imagine what Ben Eater could build today. ;)
Wire wrap is/was an underrated prototyping technique prior to PCB automation. NASA flew missions with wire-wrap boards.
I had an old Kenwood amplifier for years that had wire wrap board to board connectors; it worked great.
If you really take seriously what you could have done for a home computer if you had started with fully integrated chips, it's actually insane.
Imagine if you had an Amiga chipset and combined it with a RISC-like chip, done in the late 70s with 3.5μm HMOS (like the 68k). The resulting system would be insane in terms of performance to cost. You could outperform minicomputers that cost 10-100x more.
An ARM2-like chip and the complete Amiga chipset seem to have fewer transistors than a single 68k, so the price of such a system would be very low. And we can see that with the Amiga: what really blows my mind is how cheap the Amiga ended up being, an unbelievable achievement.
It seems the issue really was that the companies that had the resources, the chip design knowledge, and the finances to do that amount of work were not interested in making home computers/workstations. Workstations ended up being made by startups who didn't have the resources to do so much custom work. Apollo was a group that split off from DEC because DEC was not interested in workstations. IBM was just too slow and couldn't really do product design, and we all know how they eventually got around that problem with the PC. Apple did try one ambitious VLSI chip for the Mac but didn't end up using it.
The split between computer companies and chip design companies was just too big to get the needed amount of integration, and there was clearly a lack of vision for what a home computer could be. Jobs' vision for the Macintosh went in the right direction, but really Jay Miner had the right vision, and he had it because he built a computer for himself. He wanted a home computer that was fast, had a proper operating system, and had enough media capability to run flight simulator software. Sadly, management most of the time wanted him to develop a console, and later, when they allowed a home computer, they didn't share his full vision.
But then actually pulling this vision off, a multi-chip custom design with very few resources, is just an amazing achievement. And many of the people didn't even have that much knowledge in chip design; there was a lot of competition for chip design people. Getting into Commodore, where they had the actual semiconductor teams to get these designs over the line, was lucky; many other companies who could have bought them might have messed this up.
In a perfect world you add an ARM2-like RISC chip and a Sun-like custom MMU to something like the Amiga chipset and you move computing forward by 10+ years. In reality the exact opposite won: a 16-bit PC that had basically no custom design in it whatsoever.
For 1985, I think you'd also have to imagine that DRAM was cheaper than it really was.
RISC, and especially an MMU with paging, increases memory requirements. For comparison, the first Amiga was designed for 128K and got 256K. Linux/M68K with an MMU on the Amiga required 4MB to be usable.
I wonder how much faster the ARM2 would have been compared to the 68k in a first-generation Amiga. The Amiga's chip memory only delivered 7 MBytes/s, shared between the CPU and the chipset! With its 32-bit instruction words, the ARM2 would have been very far from its theoretical performance.
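To put a rough number on that bottleneck, here's a back-of-envelope sketch, assuming the 7 MBytes/s figure above and optimistically giving the CPU all of the bandwidth:

    # Back-of-envelope: how chip-RAM bandwidth alone would cap instruction fetch.
    # Assumes the 7 MB/s figure from the comment above; ignores DMA contention,
    # data accesses, and wait states, so these are optimistic ceilings.
    chip_ram_bandwidth = 7_000_000  # bytes/s

    arm2_instr_bytes = 4   # every ARM2 instruction is a 32-bit word
    m68k_instr_bytes = 2   # 68000 opcodes come in 16-bit words (2 bytes minimum)

    print(f"ARM2 fetch ceiling:  {chip_ram_bandwidth / arm2_instr_bytes / 1e6:.2f} M instr/s")
    print(f"68000 fetch ceiling: {chip_ram_bandwidth / m68k_instr_bytes / 1e6:.2f} M opcode words/s")

So an ARM2 running from chip RAM would be fetch-limited to well under 2 million instructions per second, before the chipset steals any cycles.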
> I wonder how much faster the ARM2 would have been compared to the 68k in a first-generation Amiga.
Considerably faster. I looked at both (and the ST) and bought an Archimedes.
ARM chips benchmarked from the ARM2 up to the RasPi 3B+:
https://stardot.org.uk/forums/viewtopic.php?t=20379
68000 benchmarks around that time:
http://www.faqs.org/faqs/motorola/68k-chips-faq/
ARM2: 5463 Dhrystones/sec
68000 @ 8MHz: 2100 Dhrystones/sec
MIPS:
https://en.wikichip.org/wiki/acorn/microarchitectures/arm2
https://en.wikipedia.org/wiki/Instructions_per_second
ARM2: from 6 to 10 million instructions per second, depending on instruction mix
68000: 1.4 MIPS typical.
(For comparison: Intel 8086 at the same speed, something like 300 Whetstones, 0.5 MIPS. So either of them stomped all over a comparable x86 machine from that time.)
So, very roughly, ARM2 was between 2-3x faster in typical use.
Note:
- Neither CPU could do FP in hardware.
- Neither had cache memory.
- The Amiga had a lot of complex hardware acceleration for graphics; the original ARM2 machines from Acorn (Archimedes A305, A310, A400) had essentially none.
So, Amiga games could do things that on the Arc required raw CPU, typically in carefully hand-coded assembler.
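A quick sanity check on those ratios, using only the numbers quoted above (not new measurements): the Dhrystone figures give about 2.6x, which is where the "roughly 2-3x" estimate comes from; the raw MIPS ratio is higher, but cross-ISA MIPS flatter a RISC, since each ARM instruction does less work than a 68k instruction.

    # Ratios implied by the benchmark numbers quoted above.
    arm2_dhry, m68k_dhry = 5463, 2100
    arm2_mips_lo, arm2_mips_hi, m68k_mips = 6.0, 10.0, 1.4

    print(f"Dhrystone ratio: {arm2_dhry / m68k_dhry:.1f}x")                                     # ~2.6x
    print(f"MIPS ratio:      {arm2_mips_lo / m68k_mips:.1f}x-{arm2_mips_hi / m68k_mips:.1f}x")  # ~4.3x-7.1x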
Well - I used Archimedes computers with ARM2 and owned an Amiga 500+ and honestly, I couldn't tell you the Arcie was faster. It certainly didn't have the custom chips, so it is probably not a fair comparison.
"custom" not "costume" (...I'm only adding this note to help, not criticise)
> Imagine if you had an Amiga Chipset and you had combined it with a RISC like chip
This was the plan for the successor machine, codenamed HOMBRE and never released.
An Amiga-like chipset closely coupled with an HP PA-RISC CPU.
https://en.wikipedia.org/wiki/Amiga_Hombre_chipset
A little more info in German:
https://www.amigawiki.org/doku.php?id=de:models:hombre
Reminds me of that LLM computer built in Minecraft that was discussed here a few weeks back.
Looks like a beehive. Very cool
a C-hive
Does it still boot?
At VCFMW it was not powered on.