Thread ID: 28735 2002-12-29 18:31:00 Corporate Computer Systems vs Home PCs Steve_L (763) Press F1
Post ID Timestamp Content User
109854 2003-01-20 09:20:00 We have a Unisys Clearpath at work (early 1990's era) running 2 units each with 64MB RAM and get this... 66MHz CPUs. Yet this system runs 3+ major national businesses and is nearing retirement. It has SCSI with 8 or so SCSI HDDs and runs SCO UNIX with MCP & LINC. Last uptime was 1 year, 7 months & 11 days, and it was only powered down due to a faulty LED... Previously turned off due to a UPS upgrade. On-the-fly hardware & firmware upgrades & software OS updates.

In regards to raw speed, CPUs aren't all that important for file servers - if they have good HDDs and SCSI cards they can be scum-of-the-earth PCs yet run for months.

Another point of interest is that a lot of older systems were also 64-bit CPUs - yet home PCs are still only 32-bit, with 64-bit only now rolling out for workstation use. Thanks to the technology from DEC (Digital), Sun and HP, they have kept their servers up there....
kiwistag (2875)
109855 2003-01-20 11:59:00 >The main emphasis on servers is the data handling: the disks are not on IDE interfaces. The computer is not tied up producing a GUI, so it can do useful work.

The Burroughs B3500 I used in 1968-69 had core memory ( I don't remember planar) . The SPO (Supervisory Print Out) was like the golfball typewriters . Input was punched cards, punched tape and later key-to-tape . Output was awesome . . . . Lineprinter at 132 characters per line ( we used to make pretty ASCII pictures ) . A disk drive was approx the size of today's washing machine . I was told (then) that to build a chess playing computer it would be the size of St Paul's Cathedral

We used to write programs in "Suites" because of memory . I was programming in COBOL for the Waterfront Industry Commission in Wellington . The Mainframe we leased was in Lower Hutt in Bunny St .

So you would be there in Wgton and write the code which was punched onto 80 col cards . You would then climb on a Unit and go to Lower Hutt to compile . . . . Don't drop the cards unless you can read sequence numbers in EBCDIC (Extended Binary Coded Decimal Interchange Code)

I remember I wrote a COBOL program to verify the input on cards . . .
Checking the date field takes a while (if day > 28 and month = 2 and NOT a leap year) etc .
Anyway . . . The program wouldn't compile as I ran out of memory .
Program size 14 Kb . I cut it down by putting in some Assembler routines .
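
For anyone curious, here is a minimal sketch in Python of the kind of date-field check described above (the original was COBOL, of course). It assumes a six-digit DDMMYY field punched on the card; the field layout, century handling and function names are made up for illustration, not taken from the original program.

    # Rough sketch of a card-input date check, assuming a six-digit
    # DDMMYY field; layout and names are illustrative only.

    def is_leap_year(year: int) -> bool:
        """Gregorian leap-year rule."""
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    def valid_ddmmyy(field: str, century: int = 1900) -> bool:
        """Return True if a DDMMYY string is a plausible calendar date."""
        if len(field) != 6 or not field.isdigit():
            return False
        day, month = int(field[0:2]), int(field[2:4])
        year = century + int(field[4:6])
        if not (1 <= month <= 12) or day < 1:
            return False
        days_in_month = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
        limit = days_in_month[month - 1]
        # The case called out above: day > 28, month = 2, not a leap year.
        if month == 2 and is_leap_year(year):
            limit = 29
        return day <= limit

    # e.g. valid_ddmmyy("290269") -> False (1969 was not a leap year)
    #      valid_ddmmyy("290268") -> True  (1968 was)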

Yep . . . . Those were the days of Batch programming .

Tape drives were sequential processing and disks could be random access ( If you used an actual key based on an algorithm that would not create duplicate records in the database . ) (-:
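
As a rough illustration of such a key-to-address algorithm, here is a minimal Python sketch assuming a fixed number of record slots; the slot count, key format and hashing choice are illustrative only, not from any particular Burroughs system.

    # Minimal sketch of a key-to-address scheme for direct (random) access.
    # Assumes a fixed file of NUM_BUCKETS record slots; slot count and key
    # format are made up for illustration.

    NUM_BUCKETS = 10_007  # a prime number of slots spreads keys more evenly

    def record_address(key: str) -> int:
        """Map a record key to a slot number in the direct-access file."""
        # Reduce the key to a number modulo the slot count; two distinct
        # keys landing on the same slot is the "duplicate record" problem
        # a good key algorithm has to avoid.
        value = 0
        for ch in key:
            value = value * 31 + ord(ch)
        return value % NUM_BUCKETS

    # e.g. record_address("WIC-1968-00042") gives the slot to read or write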

Them were the days .

I'm happier now as I can play chess here and I don't live in London .

Mainframes didn't have all that much memory: memory was very expensive . A big Burroughs I used had 3 MB of memory (core and planar) and a 5 MHz CPU, and a 10 MHz arithmetic processor . That was a very fast machine . . . 30 or more interactive sessions, and a heavy batch load (jobs started from cards, often) . It could access individual records on the big data files I worked on very quickly (although it had an I/O processor which handled the disks) . Compiling was very fast: about 1000 lines of Algol source took around a second of CPU time, so I was amazed when I saw how slow PC compilers were . C compilers on PCs take a looooong time .

A Prime started with 2 MB, but was expanded with another 2 MB (1 board about 18" x 20") . It could handle a fair number of interactive users, but most things involving data were slow . (That boasted an I/O bandwidth of 80 MHz . . . which probably meant 8 bit transfers at 10 MHz )


I believe Google use a Beowulf cluster of 8000 Pentiums to search the WWW to make their database .

The International airline reservation service, and the credit card companies still use big mainframes . They have to have the reliability .
Elephant (599)