| Thread ID: 28735 | 2002-12-29 18:31:00 | Corporate Computer Systems vs Home PCs | Steve_L (763) | Press F1 |
| Post ID | Timestamp | Content | User |
| 109844 | 2002-12-29 18:31:00 | Here is a question that has been on my mind for years: how much more powerful are the huge computers that big corporations use? Certainly they must have huge-capacity HDs, but what about CPUs and other items that relate to raw speed? Following on from this, I wonder how the corporate systems of the past compare to home PCs today. For instance, back in the early 1990s I worked overseas in a large national corporation. Our desktop PCs had 16 MHz CPUs, and I remember that those who had the new 25 MHz chips gloated about how much better they were. Now, the computer room was two floors below where I worked, and I would walk by the big window of this room wondering how powerful the system was. Back then, would they have had the processing speed that present-day home PCs have now, say 2 GHz CPUs? And 120 GB HDs, or more? Maybe there is a web site you know of that compares such things? Thanks. - Steve | Steve_L (763) |
| 109845 | 2002-12-29 18:32:00 | PS: And what about what the big ISPs use, like XTRA? How would their computer systems compare in speed and memory? | Steve_L (763) |
| 109846 | 2002-12-29 20:35:00 | I don't know that much about corporate systems, but in the late 1960s I used an IBM 360 mainframe at the University of New South Wales in Sydney. That particular machine was reputed to be the most powerful computer in Australia at the time. I understand a very few corporates in Oz also used 360s then, but with somewhat lower specs. The Univ of NSW machine had 1.2 MB of memory (that's about 1200 KB). I don't recall it being called RAM back then, just "memory". Of course the software then was much less demanding of system resources than it is now ... | rugila (214) |
| 109847 | 2002-12-29 20:47:00 | Your average x86 server will not be considerably faster than a good desktop. The major differences are usually a matter of thermal management, reliability, and hard disk performance; e.g. server systems may be SAN-attached over Fibre Channel, with large RAID sets. So a quick workstation may be able to keep up with some servers processing-wise, but the disk performance won't be adequate for many roles such as database serving or Exchange e-mail [see the rough disk comparison sketched after the thread]. Most new servers will be dual-processor, which doesn't yield large performance increases on desktops but works well with many server applications (SQL, Exchange etc.). Only really exotic supercomputers will be hugely different in performance, though large x86 clusters are keeping pace with these too nowadays. 64-bit Sun systems used to rule the roost for small enterprise database serving, but HP are attacking Solaris systems with Linux at the moment. | BIFF (1) |
| 109848 | 2002-12-29 21:13:00 | BIFF is right. At work I've got a 633 MHz Celeron, which is prolly one of the lower-spec PCs. They recently got in some more 2 GHz P4s with GeForce 4s and 256 MB DDR RAM! Minimum RAM in the PCs at work is 256 MB, I think. There might still be one or two with only 128 MB RAM! | Chilling_Silence (9) |
| 109849 | 2002-12-30 04:14:00 | The main emphasis on servers is the data handling: the disks are not on IDE interfaces :D. The computer is not tied up producing a GUI, so it can do useful work. Mainframes didn't have all that much memory: memory was very expensive. A big Burroughs I used had 3 MB of memory (core and planar), a 5 MHz CPU, and a 10 MHz arithmetic processor. That was a very fast machine ... 30 or more interactive sessions, plus a heavy batch load (jobs often started from cards). It could access individual records on the big data files I worked on very quickly, although it had an I/O processor which handled the disks. Compiling was very fast: about 1000 lines of Algol source took around a second of CPU time, so I was amazed when I saw how slow PC compilers were. C compilers on PCs take a looooong time. A Prime started with 2 MB, but was expanded with another 2 MB (1 board, about 18" x 20"). It could handle a fair number of interactive users, but most things involving data were slow. (That boasted an I/O bandwidth of 80 MHz ... which probably meant 8-bit transfers at 10 MHz ;-)) [see the arithmetic after the thread]. I believe Google use a Beowulf cluster of 8000 Pentiums to search the WWW to make their database. The international airline reservation service and the credit card companies still use big mainframes. They have to have the reliability. | Graham L (2) |
| 109850 | 2002-12-30 08:32:00 | "I believe Google use a Beowulf cluster of 8000 Pentiums to search the WWW to make their database." Thanks for the replies. The above item about Google is interesting...! Not too sure what some of the jargon means.... I have heard of "mainframes" for years but still do not know exactly what they are relative to a home PC. Still wondering about the big ISPs here in NZ. Anyone know? I guess I could ring up XTRA and Paradise (my two ISPs) and ask the techie of the moment...! | Steve_L (763) |
| 109851 | 2002-12-30 09:18:00 | In 1968 I was programming a Burroughs B3500 mainframe in COBOL. Punched paper tape, punched cards, tape readers and random-access disks. The disk drive was approximately the size of a washing machine. Line printers ran at 132 characters per line. Most input was via punched cards in EBCDIC format. I wrote programs that would validate the data and then transfer it to disk, and had to write programs that would sort the data (on access key parameters) etc. MIS (Management Information Systems) were the in thing. Tape was sequential access and disks were sequential or random. Most applications were databases (financial, warranty, parts etc.). Spreadsheets weren't thought of then; VisiCalc came later. :-) If you have a 286 at 4.7 MHz then that would eat what I had to work with at first. Enjoy!!! :-) Oh yes.... the mainframe would not compile a program if it was more than 12 KB. | Elephant (599) |
| 109852 | 2002-12-30 10:02:00 | > I don't know that much about corporate systems, but in the late 1960s I used an IBM 360 mainframe at the University of New South Wales in Sydney. That particular machine was reputed to be the most powerful computer in Australia at the time. ... In the early 60s I had a tour of the National Security Agency in Washington and was impressed by their computer basement. Vast!!! Big special-purpose machines, not bearing maker names familiar in the computer world. Later I went to GCHQ in the UK, whose latest monster had come from Control Data. (Now equivalent to, say, a 386??) DSD in Melbourne had earlier used Collerob and Infuse - just about equivalent to an XT Turbo ... Some F1 readers will have plenty of grunt .... | TonyF (246) |
| 109853 | 2002-12-30 10:10:00 | > I believe Google use a Beowulf cluster of 8000 Pentiums to search the WWW to make their database. ... Google started off with Celeron 450s, and then added a few thousand Celeron 1000s, with 256 MB RAM and 60 GB of storage each. Now at 10,000 machines in 3 locations, and growing [see the totals worked out after the thread]. Running a cut-down Red Hat. | TonyF (246) |
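
BIFF's point above, that the real gap between a server and a desktop is the disk subsystem rather than the CPU, can be put into rough numbers. The sketch below is only a back-of-envelope comparison: the seek times, spindle speeds, and the 12-disk array size are assumed figures typical of early-2000s hardware, not anything quoted in the thread.

```python
# Back-of-envelope comparison of random-I/O capability: one desktop IDE disk
# versus a small server RAID set.  All drive figures below are assumptions
# typical of early-2000s hardware, not measurements from the thread.

def random_iops(seek_ms: float, rpm: int) -> float:
    """Approximate random I/Os per second for a single spindle:
    one average seek plus half a rotation per access."""
    rotational_latency_ms = (60_000 / rpm) / 2   # half a revolution, in ms
    service_time_ms = seek_ms + rotational_latency_ms
    return 1000 / service_time_ms

# One 7,200 rpm IDE desktop drive (assumed ~8.5 ms average seek).
desktop_disk = random_iops(seek_ms=8.5, rpm=7_200)

# Twelve 10,000 rpm SCSI drives striped in a RAID set (assumed ~5 ms seek).
server_array = 12 * random_iops(seek_ms=5.0, rpm=10_000)

print(f"Desktop IDE disk : ~{desktop_disk:4.0f} random IOPS")
print(f"12-disk RAID set : ~{server_array:4.0f} random IOPS "
      f"(~{server_array / desktop_disk:.0f}x the desktop)")
```

On those assumptions the array sustains roughly 19 times the random I/O of the single desktop disk, which is why a fast workstation can match a server on CPU yet still fall well short on database or Exchange workloads.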
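
Two sets of figures quoted in the thread are easy to sanity-check with a few lines of arithmetic: Graham L's reading of the Prime's 80 MHz I/O bandwidth as 8-bit transfers at 10 MHz, and the aggregate size of the Google cluster TonyF describes. The machine count and per-machine specs below are simply the numbers given in the posts; the totals are rough estimates, nothing more.

```python
# Working through two sets of figures quoted in the thread.  The inputs come
# straight from the posts above; the rest is plain arithmetic.

# Graham L's Prime: "80 MHz" I/O bandwidth read as 8-bit transfers at 10 MHz.
bits_per_transfer = 8
transfers_per_second = 10_000_000                 # 10 MHz
io_bits_per_second = bits_per_transfer * transfers_per_second
print(f"Prime I/O bandwidth: {io_bits_per_second / 1e6:.0f} Mbit/s "
      f"= about {io_bits_per_second / 8 / 1e6:.0f} MB/s")

# TonyF's Google cluster: ~10,000 machines, each with 256 MB RAM and 60 GB disk.
machines = 10_000
ram_mb_each, disk_gb_each = 256, 60
total_ram_gb = machines * ram_mb_each / 1024      # MB -> GB
total_disk_tb = machines * disk_gb_each / 1024    # GB -> TB
print(f"Cluster RAM  : about {total_ram_gb:,.0f} GB (~{total_ram_gb / 1024:.1f} TB)")
print(f"Cluster disk : about {total_disk_tb:,.0f} TB")
```

So the Prime's headline figure works out to around 10 MB/s, and the 2002-era Google cluster to roughly 2.4 TB of RAM and a bit under 600 TB of disk, spread across commodity machines rather than one giant box.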