HEB (header-binary) is a minimalistic format for storing multidimensional data. A file in HEB format has two sections: an ASCII header of fixed length that contains a table of attribute_name/attribute_value pairs, and a section that keeps the data as a flat binary array. This is one of the simplest solutions for storing and disseminating scientific datasets.

Why HEB? There exist HDF, NetCDF, FITS, and other formats that are widely used in scientific data analysis. But HDF, NetCDF, and FITS provide functionality that is not needed by more than 95% of users. This is a conservative estimate; quite possibly 99.99%. This extra, unnecessary functionality comes with a hefty price tag:

1) HDF and NetCDF are data archives: usually one file contains many datasets. A user application almost never needs all of them, yet the user has to download and store data that will never be used. Handling large datasets is not easy, and the necessity to handle junk that bloats the dataset by a factor of 2, 5, sometimes even 100 makes the task much more difficult and expensive.

2) HDF and NetCDF are not human readable. You cannot look at a file and say what is inside. The format is not documented; although it is possible to unearth a format description or reverse engineer the format, this is not a task that even an advanced user would contemplate. The only practical way to read or write the data is through special libraries.

3) The HDF library (version 1.8.10) has 491 files of C code and 209 files of C headers, in total 701,508 lines. NetCDF version 4.2.1 has "only" 284,100 lines of source code. Compiling and patching these monsters is not an easy task.

4) Programming with the HDF/NetCDF/CFITSIO libraries is a tedious and time-consuming task. A typical program that reads or writes such data has 300-700 lines of code, and a program that reads data product A usually cannot be used for reading data product B.

5) HDF, NetCDF, and FITS are slow.
Reading or writing data in HDF, NetCDF, or FITS is at least a factor of 2-3 slower than sequential reading of the recording medium.

If you really, Really, REALLY need the fancy features of HDF, NetCDF, or FITS, use them. If you need the simplest and fastest solution, use HEB.

A file in HEB format keeps the data of one scientific dataset and a set of up to 64 attributes. Four of them are mandatory; the others are arbitrary. The HEB I/O library has five routines, three of them user-visible. In total, the library has 534 lines of code, including comments (i.e. more than one thousand times(!!) shorter than HDF). It is written in portable C and can be called from C, C++, or Fortran.

For the format description        see file doc/heb_format.txt
For using the library             see file doc/user_guide.txt
For installation instructions     see file INSTALL
For examples of reading and writing data in HEB format, see the directory examples.

You can easily write your own program that reads HEB data. Just keep in mind that the first 2048 bytes hold the ASCII header, which is followed by the binary section holding a flat 4D array of data.