Abstract : | In pulsar and transient astronomy, high-resolution data generation and efficient processing are crucial. However, to archive the data for legacy purposes, as well as for potential scientific uses beyond those originally planned, it is important to reduce the data size in ways that maximize its scientific value. As an example, with the currently followed observing setup, the Indian Pulsar Timing Array (InPTA) experiment will acquire more than 600 terabytes of data over 10 years using the GMRT. Other experiments, such as searches for new pulsars and transients, may acquire even larger data volumes depending on their scope. This substantial volume of data underscores the critical need for the development and implementation of an efficient data reduction pipeline. With these motivations, we have developed GBPREP, a Python-based pipeline designed for efficient reduction of uGMRT pulsar/beam data. Pulsar processing packages such as gptool, RFIClean, PRESTO, SIGPROC, DSPSR, PSRCHIVE, and ugmrt2fil are integrated into GBPREP to perform RFI excision, filterbank conversion, dedispersed time series preparation, decimation, bandpass correction, and folding. Depending on the original observing setup, the pipeline can readily reduce the data size by an order of magnitude or more while still enabling a variety of scientific analyses. We discuss the implementation of GBPREP, the data reduction it achieves, and its usefulness, demonstrated through a successful search for giant pulses from millisecond pulsars using InPTA data. We show that, even with the reduced data size, all the scientific features of the original data needed for the detection and analysis of giant pulses are preserved. We also illustrate GBPREP's ability to retain scientific information by comparing pulse times of arrival generated from the original and the reduced data.
|
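The order-of-magnitude reduction quoted above can be illustrated with a toy sizing model. Everything below is an illustrative sketch, not GBPREP's actual interface: the stage names simply mirror the processing steps listed in the abstract, and the decimation factors are assumed values chosen for the example.

```python
# Illustrative sketch of a GBPREP-style reduction chain.
# Stage names and decimation factors are assumptions for illustration only.

STAGES = [
    "rfi_excision",        # e.g. via gptool / RFIClean
    "filterbank_convert",  # e.g. via ugmrt2fil / SIGPROC
    "bandpass_correction",
    "decimation",          # reduce time/frequency resolution
    "dedisperse",          # dedispersed time series, e.g. via PRESTO
    "fold",                # e.g. via DSPSR / PSRCHIVE
]

def reduced_size_bytes(raw_bytes, time_decim=8, freq_decim=4, bits_ratio=1.0):
    """Rough size of the reduced product: decimating in time and in
    frequency shrinks the data by the product of the two factors;
    bits_ratio accounts for any change in sample bit depth."""
    return raw_bytes * bits_ratio / (time_decim * freq_decim)

# A 600 TB archive decimated 8x in time and 4x in frequency shrinks
# by a factor of 32, i.e. more than an order of magnitude.
raw = 600e12  # bytes
print(reduced_size_bytes(raw) / 1e12)  # TB after reduction
```

This kind of back-of-the-envelope model only bounds the storage savings; whether the reduced product still supports a given analysis (e.g. giant-pulse detection or timing) depends on the science case, which is what the comparison of times of arrival in the abstract is meant to verify.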