During an SPS burst, the SDPCs simply receive data from the subdetectors and store them in memory. Out of burst, the received data block is checked for consistency and then partitioned, i.e. subdivided into at most m blocks, each of which is assigned to an EBPC. The partitioning task runs in one of two modes. In static mode, the size of each partition is fixed at the start of a run. In dynamic mode, if a particular EBPC goes down, the partitioning task simply removes it from the list of EBPCs and the partition sizes are recomputed for the next burst.
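The dynamic partitioning mode can be sketched as follows; this is an illustrative Python sketch, and the function and variable names are hypothetical, not taken from the actual SDPC software.

```python
# Sketch of the out-of-burst partitioning step (names are illustrative).

def partition(data: bytes, live_ebpcs: list[str]) -> dict[str, bytes]:
    """Split one burst's data block across the EBPCs currently up.

    In dynamic mode, a failed EBPC is dropped from `live_ebpcs` before
    the next burst, so the partition sizes adapt automatically.
    """
    size = len(data) // len(live_ebpcs)   # base partition size
    parts = {}
    for i, pc in enumerate(live_ebpcs):
        start = i * size
        # the last partition absorbs any remainder
        end = (i + 1) * size if i < len(live_ebpcs) - 1 else len(data)
        parts[pc] = data[start:end]
    return parts
```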
Each sender process maintains a logical connection with a unique receiving process in every working EBPC. With n SDPCs and m EBPCs, there are thus n x m logical connections. In terms of physical connections, there is a single point-to-point full-duplex link from each PC (SDPC or EBPC) to the switch. Each sender process then sends all data in its partition to the relevant receiver process. The protocol used is TCP/IP, which handles all flow control and ensures data integrity and completeness. Note that at this stage the event structure is invisible and the IP packets are not correlated with event fragments.
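A minimal sketch of one such sender-to-receiver logical connection is shown below; host and port values are illustrative. Because TCP provides flow control and ordered, complete delivery, the sender simply streams its entire partition, with no event boundaries visible at this level.

```python
import socket

def send_partition(host: str, port: int, partition: bytes) -> None:
    # One logical connection per (SDPC, EBPC) pair; n senders times m
    # receivers gives the n x m logical connections mentioned above.
    with socket.create_connection((host, port)) as sock:
        # TCP segments the stream itself; the packets carry no
        # correlation with event fragments.
        sock.sendall(partition)
```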
When the receiver gets the data, it stores them in memory. An event builder task runs in every EBPC and searches through the received data blocks, extracting for each event a list of pointers that identifies its component fragments. The pointers are stored in a pointer table in shared memory.
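The pointer-table construction could be sketched as below. This assumes each fragment carries a small header of (event number, length); the real NA48 fragment format is not given here, so the header layout and all names are illustrative.

```python
import struct

# Assumed fragment header: (event_number, fragment_length), little-endian.
HEADER = struct.Struct("<II")

def build_pointer_table(blocks: list[bytes]) -> dict[int, list[tuple[int, int, int]]]:
    """Map event number -> [(block_index, offset, length), ...].

    Only pointers are stored; the fragment data stays where it was
    received, as in the shared-memory pointer table described above.
    """
    table: dict[int, list[tuple[int, int, int]]] = {}
    for b, block in enumerate(blocks):
        off = 0
        while off < len(block):
            event, length = HEADER.unpack_from(block, off)
            off += HEADER.size
            table.setdefault(event, []).append((b, off, length))
            off += length   # skip the payload to the next header
    return table
```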
After the event building stage, it is possible to apply fast filter algorithms ("Level 2B") to the data stored in memory, rejecting background events.
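In the simplest terms, such a fast filter reduces to applying a cheap predicate to each built event; the actual Level 2B selection criteria are not described here, so the sketch below is purely illustrative.

```python
# Hypothetical illustration of the "Level 2B" stage: a fast predicate is
# applied to each built event in memory, and events flagged as
# background are dropped before anything is written to disk.

def level2b_filter(events, is_background):
    """Keep only events the fast filter does not flag as background."""
    return [ev for ev in events if not is_background(ev)]
```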
A disk writer process takes each complete set of pointers from the pointer table and copies the corresponding fragments from memory to disk. The disk files thus consist of reconstituted events. One disk file (burst fragment) corresponds to the data from one burst and contains, in chronological order, all the events sent to a given event builder.
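The disk-writer step can be sketched as follows, assuming a pointer table that maps event number to (block_index, offset, length) triples into the received data blocks; this layout is an assumption, not the actual NA48 format. Writing events in ascending event number keeps the burst-fragment file in chronological order.

```python
def write_burst_fragment(path: str, blocks: list[bytes],
                         table: dict[int, list[tuple[int, int, int]]]) -> None:
    """Copy each event's fragments from memory to one burst-fragment file."""
    with open(path, "wb") as f:
        for event in sorted(table):               # chronological order
            for b, off, length in table[event]:   # copy each fragment
                f.write(blocks[b][off:off + length])
```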
A CDR (Central Data Recording) process then starts to move completed disk files to the CS-2. To do this, it simply sends the data back through the switch (again using TCP/IP over Fast Ethernet, FDDI and Gigabit Ethernet) to the computer centre which is ca. 7km away from the experimental area. As soon as a burst fragment file has been transferred successfully it is deleted from the EBPC's disk buffer. An automatic retry mechanism asynchronously takes care of failed transfers.
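The transfer-and-delete logic with retry might look like the sketch below, where a local file copy stands in for the network transfer to the computer centre and all names are illustrative. The key invariant from the text is preserved: a burst-fragment file is deleted only after a successful transfer, and a failed transfer is re-queued for a later attempt.

```python
import os
import queue
import shutil

def cdr_loop(work: "queue.Queue[str | None]", dest_dir: str) -> None:
    """Move queued burst-fragment files to dest_dir, retrying on failure."""
    while True:
        path = work.get()
        if path is None:                 # sentinel: shut the mover down
            break
        try:
            shutil.copy(path, dest_dir)  # stand-in for the TCP/IP transfer
            os.remove(path)              # delete only after success
        except OSError:
            work.put(path)               # transfer failed: retry later
```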
In the computer centre the burst fragment files are combined into complete bursts, processed by a software filter and fed to the online reconstruction program. Eventually, raw or filtered burst files and the results of the online reconstruction pass are written to Redwood STK tapes using the computer centre's tape robot.
The drawing below shows a simplified example of the event building process with 3 SDPCs and 2 EBPCs.