HPCN 94

The following article was published in "SUPERCOMPUTER EUROPEAN WATCH" July 1994

Editorial

Ad Emmen, Jaap Hollenberg (Stichting Supercomputing Support Services (4-S))

Immediately after the publication of the Rubbia report, little attention was paid to the "N" in HPCN; computing and networking were considered to be two different things, glued together only in that particular abbreviation. This is now changing.

One important showcase of the indispensability of networking in the high-performance computing area was the Technology Demonstrators Display (TDD) at the Munich HPCN Europe '94 event. At this technology transfer show, the machines in the exhibition booths were connected by a powerful network, which was put in place by a team from CERN. This network was based on HiPPI, a high-speed network technology with speeds of up to 100 Mbyte/s, making it one of the fastest available. HiPPI is often employed in supercomputer centres to link mass storage and compute servers, and it can be used to turn clusters into parallel computers.

To get a feel for the organisation of the first European demonstration HiPPI-net, we talked to a number of the participants about the past, the present and the future of HiPPI-net, hardware and software.

HiPPI-net - the backbone of HPCN Europe

A novelty at HPCN Europe 1994 in Munich was the HiPPI-net, part of the Technology Demonstrators Display (TDD). To further enhance the interaction between users and suppliers, the event featured this technology transfer show, which was the responsibility of SARA, the Dutch National Supercomputer Centre in Amsterdam. In the TDD, eight European HPC centres demonstrated real applications on real high-performance computing machines (see also our May issue). This was done by connecting the machines in the exhibition booths with a powerful network, based on HiPPI technology, which was put in place by a team from CERN.

Figure: the HiPPI-net at HPCN Europe 1994 in Munich. The total amount of data transmitted during the show was 25 Tbyte. The same amount of data was sent around the network for testing.

HiPPI is a high-speed network technology, with speeds of up to 100 Mbyte/s, making it one of the fastest available. HiPPI is often employed in supercomputer centres to link mass storage and compute servers, and it can be used to turn clusters into parallel computers. HiPPI cables are limited by design to 25 metres, although several repeaters can be used - as was done during HPCN Europe. An alternative is the so-called HiPPI-to-fibre-optics converter and extender. This works like a modem, and was put in place by BCP (Broadcast Communications Products), represented by Laser 2000. Such a fibre-optic link is typically 100 metres or more long, and can go up to 10 kilometres. The one used was 100 metres long, yet in volume equivalent to only 5 metres of HiPPI cable.
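As a rough back-of-the-envelope check on these figures (our arithmetic, not from the article), the 25 Tbyte quoted in the figure caption corresponds to nearly three days of continuous transfer at HiPPI's 100 Mbyte/s peak rate - plausible for a multi-day show with many machines sharing the network:

    # Back-of-the-envelope check: time to move 25 Tbyte at HiPPI's peak rate.
    HIPPI_PEAK = 100e6   # bytes per second (100 Mbyte/s)
    DATA_SHOWN = 25e12   # bytes (25 Tbyte transmitted during the show)

    seconds = DATA_SHOWN / HIPPI_PEAK
    print(f"{seconds:,.0f} s = {seconds / 3600:.1f} h = {seconds / 86400:.1f} days")
    # -> 250,000 s = 69.4 h = 2.9 days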

To get a feel for the organisation of the first European demonstration HiPPI-net, we talked to a number of the participants about the past, the present and the future of HiPPI-net, hardware and software.

The term HPCN was coined in Europe - modelled after the American HPCC. In the beginning, immediately after the publication of the Rubbia report, little attention was paid to the "N" in HPCN. In fact one of the most widely heard criticisms about the Rubbia report was that it presented HPC & N as an integrated effort, which, according to the critics, was incorrect:

networks are just networks, and you can use them for "everything", including non-HPC applications.

The Rubbia report also called for "demonstrators of the new HPCN technology", to show the real benefits (and problems) of HPCN. This was the inspiration for the Technology Demonstrators. From the very beginning, it was clear that the success of the TDD, and in fact of the Exhibition too, depended on a good, fast and reliable network that employed leading-edge technology. A typical HPCN application needs different hardware and software platforms, ranging from mass storage to supercomputing capacity and from numerical software to visualisation packages. The only way to make it all work together is a fast network. Robert McLaren and Arie van Praag, from CERN, started the mission impossible of putting the first large European HiPPI network together: meeting scepticism when they took their first steps, but enthusiasm after completion.

The HiPPI-net showed that, although there is HPC without "N" and HPN without "C", the two are to a large extent intimately connected. In fact, the HiPPI-net glued so much powerful equipment together that, for the duration of the event, HPCN Europe '94 housed one of the larger HPCN centres in Germany, if not Europe.

Nearly everything went well in the construction of the network. One problem was the scale of the floor plan; fortunately, it was wrong in the right sense: the building was actually smaller than drawn, so the cables, sized from the plan, turned out to be more than long enough.

The CERN team

Arie van Praag and Robert McLaren told us that CERN was, from the start, very enthusiastic about collaborating in a European environment to make the HiPPI-net a success. They first contacted a number of exhibitors, or potential exhibitors, to see whether they would be interested and, once over a certain threshold, saw there would be enough participants to do something interesting. They then went ahead with the actual planning.

Key to the success was the contact with IOSC and NSC, companies that helped by providing the switches needed, as only one switch could be made available by CERN. In addition, both NOVA (representing IOSC) and NSC provided a number of extra cables. Exhibiting firms joined in and finally they got a really good network together and started to work out all kinds of addressing schemes and the like, in order to avoid surprises as much as possible.

The Laser 2000 firm faxed that they would like to be hooked up to the network. "This was exactly the thing that we needed to create a parallel path, which goes from the switch where the Cray is connected to the one where the framebuffer is connected, in parallel to the backbone," commented Van Praag.

Before the event in Munich, the CERN team spent half a day testing an enormous number of cables and pieces of equipment. They then started to build the physical network, first connecting the two cables of each link together and testing them in loop-back mode. For the last and furthest connection, 50-metre cables had to be used; according to specialists these should work, but in a loop-back test over the resulting 100 metres of cable they did not. This offered a good opportunity for Van Praag and McLaren to open their box of tricks and use a prototype of the Los Alamos HiPPI repeater. "We put that between the two cables and the errors were gone."
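The loop-back procedure described above amounts to sending known data out through a link wired back onto itself and checking that every block returns intact. A minimal sketch of the idea in Python, with a plain function standing in for the real cable loop:

    import os

    def loopback_test(channel, n_blocks=1000, block_size=65536):
        """Send random blocks through `channel` (a stand-in for a cable
        wired back to itself) and count blocks that return corrupted."""
        errors = 0
        for _ in range(n_blocks):
            block = os.urandom(block_size)
            if channel(block) != block:
                errors += 1
        return errors

    # A perfect loop is just the identity function: zero errors expected.
    assert loopback_test(lambda block: block, n_blocks=10) == 0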

CERN staff set up the HiPPI-net in two days, and by Sunday evening, 8 o'clock, everything was up and running; the event started on Monday morning. "It is a real performance," commented one of the participants, "it is working, and it has worked very well since the very beginning of the show. I have transmitted 100 Tbyte of data without an error at all: it is fantastic." The CERN team thus rightfully earned a lot of praise, also from people not connected to the network themselves. Many had thought that they would never get the network together ("it is too complex"). In hindsight, they must have regretted not having brought or connected a machine.

Next year, the CERN team will probably do some things differently, but "the main philosophy, the principle, and how it was built, was right," commented McLaren. Maybe the TDD network should include some new technology next year; McLaren is thinking about technologies like Fibre Channel.

The partners

Francois Gaullier, from Hewlett-Packard France, told us that his company developed the HiPPI interface for its S700 series of workstations. At HPCN Europe they showed this product to the world of supercomputer applications, so naturally they decided to come to Munich and participate in the HiPPI-net that CERN was putting together. "Via the HiPPI network we establish a connection between various computers to show HiPPI is a reliable product and that we can really do a good job with the network."

HP had workstation models 735 and 755 available with the fully TCP/IP-compliant HiPPI interface running. The performance on the workstations is 20 Mbyte/s over TCP/IP, with a latency of 100 microseconds: very good performance figures. In the HiPPI network, one HP workstation was configured with the switch addresses of all the other machines, so that this machine could answer any other machine. "We are the first to implement this functionality of HiPPI," Gaullier stated. "This really makes the HiPPI interface look like an Ethernet or FDDI interface. The administration is the same," he added.
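The configuration Gaullier describes - one workstation holding the switch addresses of all the other machines - boils down to a static table mapping host names to HiPPI switch addresses. A hypothetical sketch of that kind of table (all names and addresses below are invented purely for illustration):

    # Hypothetical static host -> HiPPI switch-address table; every name
    # and address here is invented for illustration only.
    HIPPI_HOSTS = {
        "hp735-a":     0x0101,
        "hp755-b":     0x0102,
        "cray-el92":   0x0201,
        "framebuffer": 0x0301,
    }

    def switch_address(host):
        """Resolve a host name to its HiPPI switch address (KeyError if unknown)."""
        return HIPPI_HOSTS[host]

    print(hex(switch_address("cray-el92")))   # -> 0x201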

Looking into the future, Gaullier observed that HP is aware that HiPPI is the only fast solution currently available on the market. Within 1 to 1.5 years, the targeted network interface is Fibre Channel, especially for mass storage devices and also for workstation clusters, "but meanwhile we are pushing HiPPI and we are selling clusters based on HiPPI, because there is real demand from interested people."

Cray Research ran a live EL92 with two processors, 512 Mbyte of memory and 22 Gbyte of disk. The system was running Unicos and was connected to a local Ethernet. Some other vendors were connected to the EL: TeamQuest and Debis software. The Cray people used an FDDI connection to Stuttgart (RUS) on the TDD and were connected to the HiPPI network. Cray also showed some of their own applications, e.g. Ensight (formerly called MPGS) and Unichem, which is a tool for Molecular Dynamics. The front-end runs on a Silicon Graphics workstation, the back-end on the EL. There were also Ethernet connections to Cray offices in Munich and to Eagan in the States.

"We have quite a number of applications running on the EL," said Ulrich Schafer, from Cray Research Germany, "but of course, maybe the most important thing is the HiPPI connection that ran without a flaw." Schafer think it was one of the highlights of the show to see the framebuffer that takes frames from Ensight and shows it on the IOSC framebuffer. "I think it is quite remarkable because here we see a speed of 800 Mbyte/s. We are pleased to see this performance on the EL with the HiPPI," he commented.

Schafer thinks that "Cray should participate in next year's network effort; it is worthwhile." He notes that many customers and prospects are really talking about ATM. "Maybe we will be in a position to show ATM live on our machine next year. We have beta-test software and we have hardware already."

Nova is an enterprise located in Geneva. Its mission is to bring high-speed networking and storage to the major European and industrial computing centres. "Our niche is to test new technology, together with early adopters like CERN," said Nova's Arnaud Saint Girons. In their booth they showed HiPPI technology products from HP and from IOSC - "but we have a range of products in different technologies".

For the HiPPI-net, Nova displayed two major applications: one on Hewlett-Packard machines and one on the Cray. The HP machines were put together in a cluster through the HiPPI switch. It was a mirror of an installation existing at CERN, where an HP farm was recently moved from Ethernet to HiPPI using the IOSC HiPPI switch, and "what we see is an increase in communication speed," observes Saint Girons. With Ethernet, the speed was 0.8 Mbyte/s; with HiPPI the peak performance for the same application was 23 Mbyte/s. "So the bottleneck is not the network anymore; it depends on the price of the processor."
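Computed from the quoted figures, that move from Ethernet to HiPPI is roughly a thirty-fold improvement:

    # Speed-up implied by the figures quoted above.
    ethernet = 0.8   # Mbyte/s over Ethernet
    hippi = 23.0     # Mbyte/s peak over HiPPI, same application

    print(f"{hippi / ethernet:.0f}x faster")   # -> 29x faster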

The other application in the Nova booth came from Cray. The EL92 in the Cray booth ran through one switch and through a device called a framebuffer, provided by IOSC, coupled to a Sony monitor. This high-resolution monitor displays images from the Cray at very high speed. Saint Girons was enthusiastic: "the statistics show that it runs at 52 Mbyte/s; it is only limited by the power of the Cray. If they go to a bigger Cray C90 they will probably get 70-80 Mbyte/s. The theoretical speed of HiPPI is 100 Mbyte/s. I think it is very powerful and it is very impressive for our visitors."
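To put the 52 Mbyte/s in perspective: the article does not state the monitor's resolution, but assuming, purely for illustration, a 1280 x 1024 frame at 24-bit colour, the stream works out to roughly 13 full frames per second:

    # Frame rate implied by a 52 Mbyte/s stream; the 1280x1024, 24-bit
    # frame format is an assumption, not a figure from the article.
    width, height, bytes_per_pixel = 1280, 1024, 3
    frame_bytes = width * height * bytes_per_pixel   # ~3.9 Mbyte per frame
    rate = 52e6                                      # bytes/s on the HiPPI link

    print(f"{rate / frame_bytes:.1f} frames/s")      # -> ~13.2 frames/s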

Mike McKeown from Network Systems, UK, was personally involved in HPCN Europe 1994 from the very beginning. "The CERN people told us they were interested in setting up a HiPPI network and we decided that we could supply switches and cables. As a hardware networking company we cannot provide applications as such, but we just provided the infrastructure for people to run their applications over."

They provided two of their PS32 switches and some 32 cables, because the CERN people were not sure that everybody would have access to cables. The idea was to have spares around ("better too many than too few"), so that every individual vendor connecting to the HiPPI-net could be cabled, even with the rather restricted length of the individual links.

"This is the first in Europe," commented McKeown, "it seems very successful, everything went smoothly and there are a lot of demonstrations going on. There were Gbit/s of bandwidth and it was set up in a couple of days."

Next year, in Milan, Network Systems will "certainly come", probably in collaboration with Maximum Strategies, a fairly new relationship. Finally, he states - backed up by the successful TDD - that "networking and computing are now closely linked together; the idea of the basic terminal connected directly to your computer is pretty well dead and buried. So I think it is pretty false to have just computing, you need the networking too, because virtually everything is now done through some form of network."

Supercomputer European Watch. Volume 5 Issue 7. July 1994

This is one of the CERN High Speed Interconnect pages - 12 January 1995 - Arie van Praag