#!/usr/bin/env python
# Copyright 2010-2018 by Peter Cock.
# All rights reserved.
#
# This file is part of the Biopython distribution and governed by your
# choice of the "Biopython License Agreement" or the "BSD 3-Clause License".
# Please see the LICENSE file that should have been included as part of this
# package.
r"""Read and write BGZF compressed files (the GZIP variant used in BAM).

The SAM/BAM file format (Sequence Alignment/Map) comes in a plain text
format (SAM), and a compressed binary format (BAM). The latter uses a
modified form of gzip compression called BGZF (Blocked GNU Zip Format),
which can be applied to any file format to provide compression with
efficient random access. BGZF is described together with the SAM/BAM
file format at https://samtools.sourceforge.net/SAM1.pdf

Please read the text below about 'virtual offsets' before using BGZF
files for random access.

Aim of this module
------------------
The Python gzip library can be used to read BGZF files, since for
decompression they are just (specialised) gzip files. What this
module aims to facilitate is random access to BGZF files (using the
'virtual offset' idea), and writing BGZF files (which means using
suitably sized gzip blocks and writing the extra 'BC' field in the
gzip headers). As in the gzip library, the zlib library is used
internally.

In addition to being required for random access to and writing of
BAM files, the BGZF format can also be used on other sequential
data (in the sense of one record after another), such as most of
the sequence data formats supported in Bio.SeqIO (like FASTA,
FASTQ, GenBank, etc) or large MAF alignments.

The Bio.SeqIO indexing functions use this module to support BGZF files.

Technical Introduction to BGZF
------------------------------
The gzip file format allows multiple compressed blocks, each of which
could be a stand alone gzip file. As an interesting bonus, this means
you can use Unix ``cat`` to combine two or more gzip files into one by
concatenating them. Also, each block can have one of several compression
levels (including uncompressed, which actually takes up a little bit
more space due to the gzip header).

What the BAM designers realised was that while random access to data
stored in traditional gzip files was slow, breaking the file into
gzip blocks would allow fast random access to each block. To access
a particular piece of the decompressed data, you just need to know
which block it starts in (the offset of the gzip block start), and
how far into the (decompressed) contents of the block you need to
read.

One problem with this is finding the gzip block sizes efficiently.
You can do it with a standard gzip file, but it requires every block
to be decompressed -- and that would be rather slow. Additionally,
typical gzip files may use very large blocks.
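As a sketch of how recording the block size up front solves this (the
helper names here are hypothetical, not part of this module's API): BGZF
stores the compressed size of each block in a gzip extra-field subfield,
so block boundaries can be found by reading headers alone, with no
decompression:

```python
import struct
import zlib

def make_bgzf_block(data):
    # Compress one payload into a single BGZF block (illustrative only).
    c = zlib.compressobj(6, zlib.DEFLATED, -15)  # raw deflate, no wrapper
    compressed = c.compress(data) + c.flush()
    # 12 byte gzip header with FEXTRA set, then the 6 byte 'BC' subfield
    header = b"\x1f\x8b\x08\x04\x00\x00\x00\x00\x00\xff\x06\x00"
    bsize = len(header) + 6 + len(compressed) + 8  # total block size on disk
    extra = b"BC\x02\x00" + struct.pack("<H", bsize - 1)
    footer = struct.pack("<II", zlib.crc32(data) & 0xFFFFFFFF, len(data))
    return header + extra + compressed + footer

def block_size_from_header(block):
    # Read the compressed block size from the 'BC' subfield alone,
    # without decompressing any data.
    xlen = struct.unpack("<H", block[10:12])[0]
    pos, end = 12, 12 + xlen
    while pos < end:
        subfield_id = block[pos : pos + 2]
        sublen = struct.unpack("<H", block[pos + 2 : pos + 4])[0]
        if subfield_id == b"BC":
            return struct.unpack("<H", block[pos + 4 : pos + 6])[0] + 1
        pos += 4 + sublen
    raise ValueError("No 'BC' subfield - not a BGZF block")

block = make_bgzf_block(b"hello bgzf")
assert block_size_from_header(block) == len(block)
```

Adding the size returned by ``block_size_from_header`` to a block's start
offset gives the start of the next block, so a whole file can be walked
header-to-header.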

All that differs in BGZF is that the compressed size of each gzip block
is limited to 2^16 bytes, and an extra 'BC' field in the gzip header
records this size. Traditional decompression tools can ignore this,
and unzip the file just like any other gzip file.

The point of this is you can look at the first BGZF block, find out
how big it is from this 'BC' header, and thus seek immediately to
the second block, and so on.

The BAM indexing scheme records read positions using a 64 bit
'virtual offset', comprising ``coffset << 16 | uoffset``, where ``coffset``
is the file offset of the BGZF block containing the start of the read
(unsigned integer using up to 64-16 = 48 bits), and ``uoffset`` is the
offset within the (decompressed) block (unsigned 16 bit integer).

This limits you to BAM files where the last block starts by 2^48
bytes, or 256 petabytes, and the decompressed size of each block
is at most 2^16 bytes, or 64kb. Note that this matches the BGZF
'BC' field size which limits the compressed size of each block to
2^16 bytes, allowing for BAM files to use BGZF with no gzip
compression (useful for intermediate files in memory to reduce
CPU load).

Warning about namespaces
------------------------
It is considered a bad idea to use "from XXX import ``*``" in Python,
because it pollutes the namespace. This is a real issue with Bio.bgzf
(and the standard Python library gzip) because they contain a function
called open, i.e.
Suppose you do this:

>>> from Bio.bgzf import *
>>> print(open.__module__)
Bio.bgzf

Or,

>>> from gzip import *
>>> print(open.__module__)
gzip

Notice that the open function has been replaced. You can "fix" this if you
need to by importing the built-in open function:

>>> from builtins import open

However, what we recommend instead is to use the explicit namespace, e.g.

>>> from Bio import bgzf
>>> print(bgzf.open.__module__)
Bio.bgzf


Examples
--------
This is an ordinary GenBank file compressed using BGZF, so it can
be decompressed using gzip,

>>> import gzip
>>> handle = gzip.open("GenBank/NC_000932.gb.bgz", "r")
>>> assert 0 == handle.tell()
>>> line = handle.readline()
>>> assert 80 == handle.tell()
>>> line = handle.readline()
>>> assert 143 == handle.tell()
>>> data = handle.read(70000)
>>> assert 70143 == handle.tell()
>>> handle.close()

We can also access the file using the BGZF reader - but pay
attention to the file offsets which will be explained below:

>>> handle = BgzfReader("GenBank/NC_000932.gb.bgz", "r")
>>> assert 0 == handle.tell()
>>> print(handle.readline().rstrip())
LOCUS       NC_000932             154478 bp    DNA     circular PLN 15-APR-2009
>>> assert 80 == handle.tell()
>>> print(handle.readline().rstrip())
DEFINITION  Arabidopsis thaliana chloroplast, complete genome.
>>> assert 143 == handle.tell()
>>> data = handle.read(70000)
>>> assert 987828735 == handle.tell()
>>> print(handle.readline().rstrip())
f="GeneID:844718"
>>> print(handle.readline().rstrip())
     CDS             complement(join(84337..84771,85454..85843))
>>> offset = handle.seek(make_virtual_offset(55074, 126))
>>> print(handle.readline().rstrip())
    68521 tatgtcattc gaaattgtat aaagacaact cctatttaat agagctattt gtgcaagtat
>>> handle.close()

Notice the handle's offset looks different as a BGZF file. This
brings us to the key point about BGZF, which is the block structure:

>>> handle = open("GenBank/NC_000932.gb.bgz", "rb")
>>> for values in BgzfBlocks(handle):
...     print("Raw start %i, raw length %i; data start %i, data length %i" % values)
Raw start 0, raw length 15073; data start 0, data length 65536
Raw start 15073, raw length 17857; data start 65536, data length 65536
Raw start 32930, raw length 22144; data start 131072, data length 65536
Raw start 55074, raw length 22230; data start 196608, data length 65536
Raw start 77304, raw length 14939; data start 262144, data length 43478
Raw start 92243, raw length 28; data start 305622, data length 0
>>> handle.close()

In this example the first four blocks are 'full' and hold 65536 bytes
of uncompressed data. The fifth block isn't full and holds 43478 bytes.
Finally there is a special empty sixth block which takes 28 bytes on
disk and serves as an 'end of file' (EOF) marker. If this is missing,
it is possible your BGZF file is incomplete.
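Checking for that EOF marker can be done with plain file operations (a
sketch with a hypothetical helper, not part of this module's API; the
marker bytes match the ``_bgzf_eof`` constant defined later in this
module):

```python
import io

# The fixed 28 byte empty BGZF block used as an end-of-file marker:
EOF_MARKER = (
    b"\x1f\x8b\x08\x04\x00\x00\x00\x00\x00\xff\x06\x00BC\x02\x00"
    b"\x1b\x00\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00"
)

def has_bgzf_eof(handle):
    # Return True if the open binary handle ends with the EOF marker.
    handle.seek(0, 2)  # seek to the end of the file
    if handle.tell() < len(EOF_MARKER):
        return False
    handle.seek(-len(EOF_MARKER), 2)
    return handle.read() == EOF_MARKER

assert has_bgzf_eof(io.BytesIO(EOF_MARKER))
assert not has_bgzf_eof(io.BytesIO(b"truncated"))
```

This mirrors the truncation check samtools performs when reading BAM
files.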

By reading ahead 70,000 bytes we moved into the second BGZF block,
and at that point the BGZF virtual offsets start to look different
to a simple offset into the decompressed data as exposed by the gzip
library.

As an example, consider seeking to the decompressed position 196734.
Since 196734 = 65536 + 65536 + 65536 + 126 = 65536*3 + 126, this
is equivalent to jumping the first three blocks (which in this
specific example are all size 65536 after decompression - which
does not always hold) and starting at byte 126 of the fourth block
(after decompression). For BGZF, we need to know the fourth block's
offset of 55074 and the offset within the block of 126 to get the
BGZF virtual offset.

>>> print(55074 << 16 | 126)
3609329790
>>> print(bgzf.make_virtual_offset(55074, 126))
3609329790

Thus for this BGZF file, decompressed position 196734 corresponds
to the virtual offset 3609329790. However, another BGZF file with
different contents would have compressed more or less efficiently,
so the compressed blocks would be different sizes. What this means
is the mapping between the uncompressed offset and the compressed
virtual offset depends on the BGZF file you are using.

If you are accessing a BGZF file via this module, just use the
handle.tell() method to note the virtual offset of a position you
may later want to return to using handle.seek().

The catch with BGZF virtual offsets is while they can be compared
(which offset comes first in the file), you cannot safely subtract
them to get the size of the data between them, nor add/subtract
a relative offset.
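The block-by-block mapping described above can be captured in a small
lookup helper (a sketch only; ``virtual_offset_for`` is a hypothetical
name, not part of this module, and the table reuses the BgzfBlocks
output for NC_000932.gb.bgz shown earlier):

```python
def virtual_offset_for(blocks, position):
    # Map a decompressed byte position to a BGZF virtual offset, given
    # a table of (raw start, raw length, data start, data length)
    # tuples such as those yielded by BgzfBlocks.
    for raw_start, raw_length, data_start, data_length in blocks:
        if data_start <= position < data_start + data_length:
            return (raw_start << 16) | (position - data_start)
    raise ValueError("Position past the end of the decompressed data")

# The block table of NC_000932.gb.bgz (excluding the empty EOF block):
table = [
    (0, 15073, 0, 65536),
    (15073, 17857, 65536, 65536),
    (32930, 22144, 131072, 65536),
    (55074, 22230, 196608, 65536),
    (77304, 14939, 262144, 43478),
]
assert virtual_offset_for(table, 196734) == 3609329790
```

Note the table must be rebuilt for every file, which is exactly why the
uncompressed-to-virtual-offset mapping is file specific.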

Of course you can parse this file with Bio.SeqIO using BgzfReader,
although there isn't any benefit over using gzip.open(...), unless
you want to index BGZF compressed sequence files:

>>> from Bio import SeqIO
>>> handle = BgzfReader("GenBank/NC_000932.gb.bgz")
>>> record = SeqIO.read(handle, "genbank")
>>> handle.close()
>>> print(record.id)
NC_000932.1

Text Mode
---------

Like the standard library gzip.open(...), the BGZF code defaults to opening
files in binary mode.

You can request the file be opened in text mode, but beware that this is hard
coded to the simple "latin1" (aka "iso-8859-1") encoding (which includes all
the ASCII characters), which works well with most Western European languages.
However, it is not fully compatible with the more widely used UTF-8 encoding.

In variable width encodings like UTF-8, some single characters in the unicode
text output are represented by multiple bytes in the raw binary form. This is
problematic with BGZF, as we cannot always decode each block in isolation - a
single unicode character could be split over two blocks. This can even happen
with fixed width unicode encodings, as the BGZF block size is not fixed.

Therefore, this module is currently restricted to only support single byte
unicode encodings, such as ASCII, "latin1" (which is a superset of ASCII), or
potentially other character maps (not implemented).

Furthermore, unlike the default text mode on Python 3, we do not attempt to
implement universal new line mode.
This transforms the various operating system
new line conventions like Windows (CR LF or "\r\n"), Unix (just LF, "\n"), or
old Macs (just CR, "\r"), into just LF ("\n"). Here we have the same problem -
is "\r" at the end of a block an incomplete Windows style new line?

Instead, you will get the CR ("\r") and LF ("\n") characters as is.

If your data is in UTF-8 or any other incompatible encoding, you must use
binary mode, and decode the appropriate fragments yourself.
"""

import struct
import sys
import zlib

from builtins import open as _open

_bgzf_magic = b"\x1f\x8b\x08\x04"
_bgzf_header = b"\x1f\x8b\x08\x04\x00\x00\x00\x00\x00\xff\x06\x00\x42\x43\x02\x00"
_bgzf_eof = b"\x1f\x8b\x08\x04\x00\x00\x00\x00\x00\xff\x06\x00BC\x02\x00\x1b\x00\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00"
_bytes_BC = b"BC"


def open(filename, mode="rb"):
    r"""Open a BGZF file for reading, writing or appending.

    If text mode is requested, in order to avoid multi-byte characters, this is
    hard coded to use the "latin1" encoding, and "\r" and "\n" are passed as is
    (without implementing universal new line mode).

    If your data is in UTF-8 or any other incompatible encoding, you must use
    binary mode, and decode the appropriate fragments yourself.
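As a sketch of doing that decoding yourself (illustrative only, not part
of this module's API), an incremental decoder copes with multi-byte
characters that get split across reads or blocks:

```python
import codecs

def decode_chunks(chunks, encoding="utf-8"):
    # Decode byte chunks that may split a multi-byte character at a
    # chunk boundary; the incremental decoder buffers partial characters.
    decoder = codecs.getincrementaldecoder(encoding)()
    for chunk in chunks:
        yield decoder.decode(chunk)
    yield decoder.decode(b"", final=True)  # flush any buffered bytes

raw = "caf\u00e9".encode("utf-8")  # ends with the two byte sequence C3 A9
pieces = [raw[:4], raw[4:]]        # split in the middle of that character
assert "".join(decode_chunks(pieces)) == "caf\u00e9"
```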
jpayne@69: """ jpayne@69: if "r" in mode.lower(): jpayne@69: return BgzfReader(filename, mode) jpayne@69: elif "w" in mode.lower() or "a" in mode.lower(): jpayne@69: return BgzfWriter(filename, mode) jpayne@69: else: jpayne@69: raise ValueError(f"Bad mode {mode!r}") jpayne@69: jpayne@69: jpayne@69: def make_virtual_offset(block_start_offset, within_block_offset): jpayne@69: """Compute a BGZF virtual offset from block start and within block offsets. jpayne@69: jpayne@69: The BAM indexing scheme records read positions using a 64 bit jpayne@69: 'virtual offset', comprising in C terms: jpayne@69: jpayne@69: block_start_offset << 16 | within_block_offset jpayne@69: jpayne@69: Here block_start_offset is the file offset of the BGZF block jpayne@69: start (unsigned integer using up to 64-16 = 48 bits), and jpayne@69: within_block_offset within the (decompressed) block (unsigned jpayne@69: 16 bit integer). jpayne@69: jpayne@69: >>> make_virtual_offset(0, 0) jpayne@69: 0 jpayne@69: >>> make_virtual_offset(0, 1) jpayne@69: 1 jpayne@69: >>> make_virtual_offset(0, 2**16 - 1) jpayne@69: 65535 jpayne@69: >>> make_virtual_offset(0, 2**16) jpayne@69: Traceback (most recent call last): jpayne@69: ... jpayne@69: ValueError: Require 0 <= within_block_offset < 2**16, got 65536 jpayne@69: jpayne@69: >>> 65536 == make_virtual_offset(1, 0) jpayne@69: True jpayne@69: >>> 65537 == make_virtual_offset(1, 1) jpayne@69: True jpayne@69: >>> 131071 == make_virtual_offset(1, 2**16 - 1) jpayne@69: True jpayne@69: jpayne@69: >>> 6553600000 == make_virtual_offset(100000, 0) jpayne@69: True jpayne@69: >>> 6553600001 == make_virtual_offset(100000, 1) jpayne@69: True jpayne@69: >>> 6553600010 == make_virtual_offset(100000, 10) jpayne@69: True jpayne@69: jpayne@69: >>> make_virtual_offset(2**48, 0) jpayne@69: Traceback (most recent call last): jpayne@69: ... 
jpayne@69: ValueError: Require 0 <= block_start_offset < 2**48, got 281474976710656 jpayne@69: jpayne@69: """ jpayne@69: if within_block_offset < 0 or within_block_offset >= 65536: jpayne@69: raise ValueError( jpayne@69: "Require 0 <= within_block_offset < 2**16, got %i" % within_block_offset jpayne@69: ) jpayne@69: if block_start_offset < 0 or block_start_offset >= 281474976710656: jpayne@69: raise ValueError( jpayne@69: "Require 0 <= block_start_offset < 2**48, got %i" % block_start_offset jpayne@69: ) jpayne@69: return (block_start_offset << 16) | within_block_offset jpayne@69: jpayne@69: jpayne@69: def split_virtual_offset(virtual_offset): jpayne@69: """Divides a 64-bit BGZF virtual offset into block start & within block offsets. jpayne@69: jpayne@69: >>> (100000, 0) == split_virtual_offset(6553600000) jpayne@69: True jpayne@69: >>> (100000, 10) == split_virtual_offset(6553600010) jpayne@69: True jpayne@69: jpayne@69: """ jpayne@69: start = virtual_offset >> 16 jpayne@69: return start, virtual_offset ^ (start << 16) jpayne@69: jpayne@69: jpayne@69: def BgzfBlocks(handle): jpayne@69: """Low level debugging function to inspect BGZF blocks. jpayne@69: jpayne@69: Expects a BGZF compressed file opened in binary read mode using jpayne@69: the builtin open function. Do not use a handle from this bgzf jpayne@69: module or the gzip module's open function which will decompress jpayne@69: the file. jpayne@69: jpayne@69: Returns the block start offset (see virtual offsets), the block jpayne@69: length (add these for the start of the next block), and the jpayne@69: decompressed length of the blocks contents (limited to 65536 in jpayne@69: BGZF), as an iterator - one tuple per BGZF block. jpayne@69: jpayne@69: >>> from builtins import open jpayne@69: >>> handle = open("SamBam/ex1.bam", "rb") jpayne@69: >>> for values in BgzfBlocks(handle): jpayne@69: ... 
print("Raw start %i, raw length %i; data start %i, data length %i" % values) jpayne@69: Raw start 0, raw length 18239; data start 0, data length 65536 jpayne@69: Raw start 18239, raw length 18223; data start 65536, data length 65536 jpayne@69: Raw start 36462, raw length 18017; data start 131072, data length 65536 jpayne@69: Raw start 54479, raw length 17342; data start 196608, data length 65536 jpayne@69: Raw start 71821, raw length 17715; data start 262144, data length 65536 jpayne@69: Raw start 89536, raw length 17728; data start 327680, data length 65536 jpayne@69: Raw start 107264, raw length 17292; data start 393216, data length 63398 jpayne@69: Raw start 124556, raw length 28; data start 456614, data length 0 jpayne@69: >>> handle.close() jpayne@69: jpayne@69: Indirectly we can tell this file came from an old version of jpayne@69: samtools because all the blocks (except the final one and the jpayne@69: dummy empty EOF marker block) are 65536 bytes. Later versions jpayne@69: avoid splitting a read between two blocks, and give the header jpayne@69: its own block (useful to speed up replacing the header). You jpayne@69: can see this in ex1_refresh.bam created using samtools 0.1.18: jpayne@69: jpayne@69: samtools view -b ex1.bam > ex1_refresh.bam jpayne@69: jpayne@69: >>> handle = open("SamBam/ex1_refresh.bam", "rb") jpayne@69: >>> for values in BgzfBlocks(handle): jpayne@69: ... 
print("Raw start %i, raw length %i; data start %i, data length %i" % values) jpayne@69: Raw start 0, raw length 53; data start 0, data length 38 jpayne@69: Raw start 53, raw length 18195; data start 38, data length 65434 jpayne@69: Raw start 18248, raw length 18190; data start 65472, data length 65409 jpayne@69: Raw start 36438, raw length 18004; data start 130881, data length 65483 jpayne@69: Raw start 54442, raw length 17353; data start 196364, data length 65519 jpayne@69: Raw start 71795, raw length 17708; data start 261883, data length 65411 jpayne@69: Raw start 89503, raw length 17709; data start 327294, data length 65466 jpayne@69: Raw start 107212, raw length 17390; data start 392760, data length 63854 jpayne@69: Raw start 124602, raw length 28; data start 456614, data length 0 jpayne@69: >>> handle.close() jpayne@69: jpayne@69: The above example has no embedded SAM header (thus the first block jpayne@69: is very small at just 38 bytes of decompressed data), while the next jpayne@69: example does (a larger block of 103 bytes). Notice that the rest of jpayne@69: the blocks show the same sizes (they contain the same read data): jpayne@69: jpayne@69: >>> handle = open("SamBam/ex1_header.bam", "rb") jpayne@69: >>> for values in BgzfBlocks(handle): jpayne@69: ... 
print("Raw start %i, raw length %i; data start %i, data length %i" % values) jpayne@69: Raw start 0, raw length 104; data start 0, data length 103 jpayne@69: Raw start 104, raw length 18195; data start 103, data length 65434 jpayne@69: Raw start 18299, raw length 18190; data start 65537, data length 65409 jpayne@69: Raw start 36489, raw length 18004; data start 130946, data length 65483 jpayne@69: Raw start 54493, raw length 17353; data start 196429, data length 65519 jpayne@69: Raw start 71846, raw length 17708; data start 261948, data length 65411 jpayne@69: Raw start 89554, raw length 17709; data start 327359, data length 65466 jpayne@69: Raw start 107263, raw length 17390; data start 392825, data length 63854 jpayne@69: Raw start 124653, raw length 28; data start 456679, data length 0 jpayne@69: >>> handle.close() jpayne@69: jpayne@69: """ jpayne@69: if isinstance(handle, BgzfReader): jpayne@69: raise TypeError("Function BgzfBlocks expects a binary handle") jpayne@69: data_start = 0 jpayne@69: while True: jpayne@69: start_offset = handle.tell() jpayne@69: try: jpayne@69: block_length, data = _load_bgzf_block(handle) jpayne@69: except StopIteration: jpayne@69: break jpayne@69: data_len = len(data) jpayne@69: yield start_offset, block_length, data_start, data_len jpayne@69: data_start += data_len jpayne@69: jpayne@69: jpayne@69: def _load_bgzf_block(handle, text_mode=False): jpayne@69: """Load the next BGZF block of compressed data (PRIVATE). jpayne@69: jpayne@69: Returns a tuple (block size and data), or at end of file jpayne@69: will raise StopIteration. jpayne@69: """ jpayne@69: magic = handle.read(4) jpayne@69: if not magic: jpayne@69: # End of file - should we signal this differently now? jpayne@69: # See https://www.python.org/dev/peps/pep-0479/ jpayne@69: raise StopIteration jpayne@69: if magic != _bgzf_magic: jpayne@69: raise ValueError( jpayne@69: r"A BGZF (e.g. 
            r"A BGZF (e.g. a BAM file) block should start with "
            r"%r, not %r; handle.tell() now says %r"
            % (_bgzf_magic, magic, handle.tell())
        )
    gzip_mod_time, gzip_extra_flags, gzip_os, extra_len = struct.unpack(
        "<LBBH", handle.read(8)
    )
    block_size = None
    x_len = 0
    while x_len < extra_len:
        subfield_id = handle.read(2)
        subfield_len = struct.unpack("<H", handle.read(2))[0]  # uint16_t
        subfield_data = handle.read(subfield_len)
        x_len += subfield_len + 4
        if subfield_id == _bytes_BC:
            if subfield_len != 2:
                raise ValueError("Wrong BC payload length")
            if block_size is not None:
                raise ValueError("Two BC subfields?")
            # uint16_t compressed block size, minus one
            block_size = struct.unpack("<H", subfield_data)[0] + 1
    if x_len != extra_len:
        raise ValueError(f"x_len and extra_len differ: {x_len} vs {extra_len}")
    if block_size is None:
        raise ValueError("Missing BC, this isn't a BGZF file!")
    # Now comes the compressed data, CRC, and length of the uncompressed data
    deflate_size = block_size - 1 - extra_len - 19
    d = zlib.decompressobj(-15)  # Negative window size means no headers
    data = d.decompress(handle.read(deflate_size)) + d.flush()
    expected_crc = handle.read(4)
    expected_size = struct.unpack("<I", handle.read(4))[0]
    if expected_size != len(data):
        raise RuntimeError("Decompressed to %i, not %i" % (len(data), expected_size))
    # Should cope with a mix of Python platforms...
    crc = zlib.crc32(data)
    if crc < 0:
        crc = crc & 0xFFFFFFFF
        expected_crc = struct.unpack("<i", expected_crc)[0]
    else:
        expected_crc = struct.unpack("<I", expected_crc)[0]
    if expected_crc != crc:
        raise RuntimeError("CRC is %s, not %s" % (crc, expected_crc))
    if text_mode:
        # Note ISO-8859-1 aka latin-1 preserves the first 256 characters
        return block_size, data.decode("latin-1")
    else:
        return block_size, data


class BgzfReader:
    """BGZF reader, acts like a read only handle but seek/tell differ.

    Let's use the BgzfBlocks function to have a peek at the BGZF blocks
    in an example BAM file,

    >>> from builtins import open
    >>> handle = open("SamBam/ex1.bam", "rb")
    >>> for values in BgzfBlocks(handle):
    ...     print("Raw start %i, raw length %i; data start %i, data length %i" % values)
    Raw start 0, raw length 18239; data start 0, data length 65536
    Raw start 18239, raw length 18223; data start 65536, data length 65536
    Raw start 36462, raw length 18017; data start 131072, data length 65536
    Raw start 54479, raw length 17342; data start 196608, data length 65536
    Raw start 71821, raw length 17715; data start 262144, data length 65536
    Raw start 89536, raw length 17728; data start 327680, data length 65536
    Raw start 107264, raw length 17292; data start 393216, data length 63398
    Raw start 124556, raw length 28; data start 456614, data length 0
    >>> handle.close()

    Now let's see how to use this block information to jump to
    specific parts of the decompressed BAM file:

    >>> handle = BgzfReader("SamBam/ex1.bam", "rb")
    >>> assert 0 == handle.tell()
    >>> magic = handle.read(4)
    >>> assert 4 == handle.tell()

    So far nothing so strange, we got the magic marker used at the
    start of a decompressed BAM file, and the handle position makes
    sense. Now however, let's jump to the end of this block and 4
    bytes into the next block by reading 65536 bytes,

    >>> data = handle.read(65536)
    >>> len(data)
    65536
    >>> assert 1195311108 == handle.tell()

    Expecting 4 + 65536 = 65540 were you?
    Well this is a BGZF 64-bit
    virtual offset, which means:

    >>> split_virtual_offset(1195311108)
    (18239, 4)

    You should spot 18239 as the start of the second BGZF block, while
    the 4 is the offset into this block. See also make_virtual_offset,

    >>> make_virtual_offset(18239, 4)
    1195311108

    Let's jump back to almost the start of the file,

    >>> make_virtual_offset(0, 2)
    2
    >>> handle.seek(2)
    2
    >>> handle.close()

    Note that you can use the max_cache argument to limit the number of
    BGZF blocks cached in memory. The default is 100, and since each
    block can be up to 64kb, the default cache could take up to 6MB of
    RAM. The cache is not important for reading through the file in one
    pass, but is important for improving performance of random access.
    """

    def __init__(self, filename=None, mode="r", fileobj=None, max_cache=100):
        r"""Initialize the class for reading a BGZF file.

        You would typically use the top level ``bgzf.open(...)`` function
        which will call this class internally. Direct use is discouraged.

        Either the ``filename`` (string) or ``fileobj`` (input file object in
        binary mode) arguments must be supplied, but not both.

        Argument ``mode`` controls if the data will be returned as strings in
        text mode ("rt", "tr", or default "r"), or bytes binary mode ("rb"
        or "br"). The argument name matches the built-in ``open(...)`` and
        standard library ``gzip.open(...)`` function.

        If text mode is requested, in order to avoid multi-byte characters,
        this is hard coded to use the "latin1" encoding, and "\r" and "\n"
        are passed as is (without implementing universal new line mode). There
        is no ``encoding`` argument.

        If your data is in UTF-8 or any other incompatible encoding, you must
        use binary mode, and decode the appropriate fragments yourself.

        Argument ``max_cache`` controls the maximum number of BGZF blocks to
        cache in memory. Each can be up to 64kb, thus the default of 100 blocks
        could take up to 6MB of RAM. This is important for efficient random
        access; a small value is fine for reading the file in one pass.
        """
        # TODO - Assuming we can seek, check for 28 bytes EOF empty block
        # and if missing warn about possible truncation (as in samtools)?
        if max_cache < 1:
            raise ValueError("Use max_cache with a minimum of 1")
        # Must open the BGZF file in binary mode, but we may want to
        # treat the contents as either text or binary (unicode or
        # bytes under Python 3)
        if filename and fileobj:
            raise ValueError("Supply either filename or fileobj, not both")
        # Want to reject output modes like w, a, x, +
        if mode.lower() not in ("r", "tr", "rt", "rb", "br"):
            raise ValueError(
                "Must use a read mode like 'r' (default), 'rt', or 'rb' for binary"
            )
        # If an open file was passed, make sure it was opened in binary mode.
        if fileobj:
            if fileobj.read(0) != b"":
                raise ValueError("fileobj not opened in binary mode")
            handle = fileobj
        else:
            handle = _open(filename, "rb")
        self._text = "b" not in mode.lower()
        if self._text:
            self._newline = "\n"
        else:
            self._newline = b"\n"
        self._handle = handle
        self.max_cache = max_cache
        self._buffers = {}
        self._block_start_offset = None
        self._block_raw_length = None
        self._load_block(handle.tell())

    def _load_block(self, start_offset=None):
        if start_offset is None:
            # If the file is being read sequentially, then _handle.tell()
            # should be pointing at the start of the next block.
            # However, if seek has been used, we can't assume that.
            start_offset = self._block_start_offset + self._block_raw_length
        if start_offset == self._block_start_offset:
            self._within_block_offset = 0
            return
        elif start_offset in self._buffers:
            # Already in cache
            self._buffer, self._block_raw_length = self._buffers[start_offset]
            self._within_block_offset = 0
            self._block_start_offset = start_offset
            return
        # Must hit the disk... first check cache limits,
        while len(self._buffers) >= self.max_cache:
            # TODO - Implement LRU cache removal?
            self._buffers.popitem()
        # Now load the block
        handle = self._handle
        if start_offset is not None:
            handle.seek(start_offset)
        self._block_start_offset = handle.tell()
        try:
            block_size, self._buffer = _load_bgzf_block(handle, self._text)
        except StopIteration:
            # EOF
            block_size = 0
            if self._text:
                self._buffer = ""
            else:
                self._buffer = b""
        self._within_block_offset = 0
        self._block_raw_length = block_size
        # Finally save the block in our cache,
        self._buffers[self._block_start_offset] = self._buffer, block_size

    def tell(self):
        """Return a 64-bit unsigned BGZF virtual offset."""
        if 0 < self._within_block_offset and self._within_block_offset == len(
            self._buffer
        ):
            # Special case where we're right at the end of a (non empty) block.
            # For non-maximal blocks could give two possible virtual offsets,
            # but for a maximal block can't use 65536 as the within block
            # offset. Therefore for consistency, use the next block and a
            # within block offset of zero.
            return (self._block_start_offset + self._block_raw_length) << 16
        else:
            # return make_virtual_offset(self._block_start_offset,
            #                            self._within_block_offset)
            # TODO - Include bounds checking as in make_virtual_offset?
            return (self._block_start_offset << 16) | self._within_block_offset

    def seek(self, virtual_offset):
        """Seek to a 64-bit unsigned BGZF virtual offset."""
        # Do this inline to avoid a function call,
        # start_offset, within_block = split_virtual_offset(virtual_offset)
        start_offset = virtual_offset >> 16
        within_block = virtual_offset ^ (start_offset << 16)
        if start_offset != self._block_start_offset:
            # Don't need to load the block if already there
            # (this avoids a function call since _load_block would do nothing)
            self._load_block(start_offset)
            if start_offset != self._block_start_offset:
                raise ValueError("start_offset not loaded correctly")
        if within_block > len(self._buffer):
            if not (within_block == 0 and len(self._buffer) == 0):
                raise ValueError(
                    "Within offset %i but block size only %i"
                    % (within_block, len(self._buffer))
                )
        self._within_block_offset = within_block
        # assert virtual_offset == self.tell(), \
        #     "Did seek to %i (%i, %i), but tell says %i (%i, %i)" \
        #     % (virtual_offset, start_offset, within_block,
        #        self.tell(), self._block_start_offset,
        #        self._within_block_offset)
        return virtual_offset

    def read(self, size=-1):
        """Read method for the BGZF module."""
        if size < 0:
            raise NotImplementedError("Don't be greedy, that could be massive!")

        result = "" if self._text else b""
        while size and self._block_raw_length:
            if self._within_block_offset + size <= len(self._buffer):
                # This may leave us right at the end of a block
                # (lazy loading, don't load the next block unless we have to)
                data = self._buffer[
                    self._within_block_offset :
self._within_block_offset + size jpayne@69: ] jpayne@69: self._within_block_offset += size jpayne@69: if not data: jpayne@69: raise ValueError("Must be at least 1 byte") jpayne@69: result += data jpayne@69: break jpayne@69: else: jpayne@69: data = self._buffer[self._within_block_offset :] jpayne@69: size -= len(data) jpayne@69: self._load_block() # will reset offsets jpayne@69: result += data jpayne@69: jpayne@69: return result jpayne@69: jpayne@69: def readline(self): jpayne@69: """Read a single line for the BGZF file.""" jpayne@69: result = "" if self._text else b"" jpayne@69: while self._block_raw_length: jpayne@69: i = self._buffer.find(self._newline, self._within_block_offset) jpayne@69: # Three cases to consider, jpayne@69: if i == -1: jpayne@69: # No newline, need to read in more data jpayne@69: data = self._buffer[self._within_block_offset :] jpayne@69: self._load_block() # will reset offsets jpayne@69: result += data jpayne@69: elif i + 1 == len(self._buffer): jpayne@69: # Found new line, but right at end of block (SPECIAL) jpayne@69: data = self._buffer[self._within_block_offset :] jpayne@69: # Must now load the next block to ensure tell() works jpayne@69: self._load_block() # will reset offsets jpayne@69: if not data: jpayne@69: raise ValueError("Must be at least 1 byte") jpayne@69: result += data jpayne@69: break jpayne@69: else: jpayne@69: # Found new line, not at end of block (easy case, no IO) jpayne@69: data = self._buffer[self._within_block_offset : i + 1] jpayne@69: self._within_block_offset = i + 1 jpayne@69: # assert data.endswith(self._newline) jpayne@69: result += data jpayne@69: break jpayne@69: jpayne@69: return result jpayne@69: jpayne@69: def __next__(self): jpayne@69: """Return the next line.""" jpayne@69: line = self.readline() jpayne@69: if not line: jpayne@69: raise StopIteration jpayne@69: return line jpayne@69: jpayne@69: def __iter__(self): jpayne@69: """Iterate over the lines in the BGZF file.""" jpayne@69: return self jpayne@69: 
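The virtual-offset packing that `tell()` and `seek()` use inline is simply `(coffset << 16) | uoffset`, with the compressed block start in the high 48 bits and the within-block offset (always below 65536) in the low 16 bits. A minimal standalone sketch of that arithmetic (the helper names `pack_virtual_offset`/`unpack_virtual_offset` are illustrative; the module itself provides `make_virtual_offset` and `split_virtual_offset` with the same semantics):

```python
def pack_virtual_offset(block_start_offset, within_block_offset):
    """Pack (coffset, uoffset) into a 64-bit BGZF virtual offset."""
    if not 0 <= within_block_offset < 65536:
        raise ValueError("Require 0 <= within_block_offset < 2**16")
    if not 0 <= block_start_offset < 281474976710656:  # 2**48
        raise ValueError("Require 0 <= block_start_offset < 2**48")
    return (block_start_offset << 16) | within_block_offset


def unpack_virtual_offset(virtual_offset):
    """Split a virtual offset back into (coffset, uoffset)."""
    return virtual_offset >> 16, virtual_offset & 0xFFFF


vo = pack_virtual_offset(100000, 10)
print(vo)  # 6553600010, i.e. 100000 * 65536 + 10
print(unpack_virtual_offset(vo))  # (100000, 10)
```

Note the consequence described in `tell()` above: virtual offsets are not unique at block boundaries, so code should compare offsets obtained consistently (always via `tell()`), not arithmetic on them.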
    def close(self):
        """Close BGZF file."""
        self._handle.close()
        self._buffer = None
        self._block_start_offset = None
        self._buffers = None

    def seekable(self):
        """Return True indicating the BGZF file supports random access."""
        return True

    def isatty(self):
        """Return True if connected to a TTY device."""
        return False

    def fileno(self):
        """Return integer file descriptor."""
        return self._handle.fileno()

    def __enter__(self):
        """Open a file operable with the with statement."""
        return self

    def __exit__(self, type, value, traceback):
        """Close a file at the end of a with statement."""
        self.close()


class BgzfWriter:
    """Write a BGZF compressed file."""

    def __init__(self, filename=None, mode="w", fileobj=None, compresslevel=6):
        """Initialize the class."""
        if filename and fileobj:
            raise ValueError("Supply either filename or fileobj, not both")
        # If an open file was passed, make sure it was opened in binary mode.
        if fileobj:
            if fileobj.read(0) != b"":
                raise ValueError("fileobj not opened in binary mode")
            handle = fileobj
        else:
            if "w" not in mode.lower() and "a" not in mode.lower():
                raise ValueError(f"Must use write or append mode, not {mode!r}")
            if "a" in mode.lower():
                handle = _open(filename, "ab")
            else:
                handle = _open(filename, "wb")
        self._text = "b" not in mode.lower()
        self._handle = handle
        self._buffer = b""
        self.compresslevel = compresslevel

    def _write_block(self, block):
        """Write provided data to file as a single BGZF compressed block (PRIVATE)."""
        # print("Saving %i bytes" % len(block))
        if len(block) > 65536:
            raise ValueError(f"{len(block)} Block length > 65536")
        # Giving negative window bits means no gzip/zlib headers,
        # -15 used in samtools
        c = zlib.compressobj(
            self.compresslevel, zlib.DEFLATED, -15, zlib.DEF_MEM_LEVEL, 0
        )
        compressed = c.compress(block) + c.flush()
        del c
        if len(compressed) > 65536:
            raise RuntimeError(
                "TODO - Didn't compress enough, try less data in this block"
            )
        crc = struct.pack("<I", zlib.crc32(block) & 0xFFFFFFFF)
        bsize = struct.pack("<H", len(compressed) + 25)  # includes -1
        uncompressed_length = struct.pack("<I", len(block))
        # Fixed 16 bytes,
        # gzip magic bytes (4), mod time (4),
        # gzip flag (1), os (1), extra length which is six (2),
        # sub field which is BC (2), sub field length of two (2),
        # Variable data,
        # 2 bytes: block length as BC sub field (2)
        # X bytes: the data
        # 8 bytes: crc (4), uncompressed data length (4)
        data = _bgzf_header + bsize + compressed + crc + uncompressed_length
        self._handle.write(data)

    def write(self, data):
        """Write method for the class."""
        if isinstance(data, str):
            # When reading we can't cope with multi-byte characters
            # being split between BGZF blocks, so we restrict to a
            # narrow interpretation of text mode...
            data = data.encode("latin-1")
        self._buffer += data
        while len(self._buffer) >= 65536:
            self._write_block(self._buffer[:65536])
            self._buffer = self._buffer[65536:]

    def flush(self):
        """Flush data explicitly."""
        while len(self._buffer) >= 65536:
            self._write_block(self._buffer[:65535])
            self._buffer = self._buffer[65535:]
        self._write_block(self._buffer)
        self._buffer = b""
        self._handle.flush()

    def close(self):
        """Flush data, write 28 bytes BGZF EOF marker, and close BGZF file.

        samtools will look for a magic EOF marker, just a 28 byte empty BGZF
        block, and if it is missing warns the BAM file may be truncated. In
        addition to samtools writing this block, so too does bgzip - so this
        implementation does too.
        """
        if self._buffer:
            self.flush()
        self._handle.write(_bgzf_eof)
        self._handle.flush()
        self._handle.close()

    def tell(self):
        """Return a BGZF 64-bit virtual offset."""
        return make_virtual_offset(self._handle.tell(), len(self._buffer))

    def seekable(self):
        """Return False; a BGZF writer is not seekable."""
        # Not seekable, but we do support tell...
        return False

    def isatty(self):
        """Return True if connected to a TTY device."""
        return False

    def fileno(self):
        """Return integer file descriptor."""
        return self._handle.fileno()

    def __enter__(self):
        """Open a file operable with the with statement."""
        return self

    def __exit__(self, type, value, traceback):
        """Close a file at the end of a with statement."""
        self.close()


if __name__ == "__main__":
    if len(sys.argv) > 1:
        print("Call this with no arguments and pipe uncompressed data in on stdin")
        print("and it will produce BGZF compressed data on stdout, e.g.")
        print("")
        print("./bgzf.py < example.fastq > example.fastq.bgz")
        print("")
        print("The extension convention of *.bgz is to distinguish these from *.gz")
        print("used for standard gzipped files without the block structure of BGZF.")
        print("You can use the standard gunzip command to decompress BGZF files,")
        print("if it complains about the extension try something like this:")
        print("")
        print("cat example.fastq.bgz | gunzip > example.fastq")
        print("")
        print("See also the tool bgzip that comes with samtools")
        sys.exit(0)

    # Ensure we have binary mode handles
    # (leave stderr as default text mode)
    stdin = sys.stdin.buffer
    stdout = sys.stdout.buffer

    sys.stderr.write("Producing BGZF output from stdin...\n")
    w = BgzfWriter(fileobj=stdout)
    while True:
        data = stdin.read(65536)
        if not data:
            break
        w.write(data)
    # Doing close will write an empty BGZF block as EOF marker:
    w.close()
    sys.stderr.write("BGZF data produced\n")
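As a sanity check of the block layout `_write_block` produces, the sketch below hand-builds one BGZF block and decompresses it with the standard gzip module, confirming that a BGZF block is a valid gzip member (FEXTRA flag set, with a 'BC' subfield holding the block size). The 16-byte header constant and the `bgzf_block` helper are local to this sketch, mirroring the module's `_bgzf_header` and `_write_block`:

```python
import gzip
import struct
import zlib

# gzip magic (4), mtime (4), XFL (1), OS (1), XLEN=6 (2),
# subfield id "BC" (2), subfield length 2 (2); BSIZE follows.
BGZF_HEADER = b"\x1f\x8b\x08\x04\x00\x00\x00\x00\x00\xff\x06\x00BC\x02\x00"


def bgzf_block(data, compresslevel=6):
    """Return one complete BGZF block holding the given bytes."""
    # Negative window bits -> raw deflate, no gzip/zlib wrapper
    c = zlib.compressobj(compresslevel, zlib.DEFLATED, -15, zlib.DEF_MEM_LEVEL, 0)
    compressed = c.compress(data) + c.flush()
    # BSIZE subfield = total block length minus one
    # (16 header + 2 bsize + deflate payload + 8 trailer)
    bsize = struct.pack("<H", len(compressed) + 25)
    crc = struct.pack("<I", zlib.crc32(data) & 0xFFFFFFFF)
    uncompressed_length = struct.pack("<I", len(data))
    return BGZF_HEADER + bsize + compressed + crc + uncompressed_length


block = bgzf_block(b"hello world\n")
# A BGZF block is a legal gzip member, so gzip can decompress it:
print(gzip.decompress(block))  # b'hello world\n'
# And the BC subfield really does record the block size minus one:
print(struct.unpack("<H", block[16:18])[0] == len(block) - 1)  # True
```

This is why plain `gunzip` (and Python's gzip module) can read BGZF files sequentially, while the BSIZE subfield lets a BGZF-aware reader jump from block start to block start without decompressing anything.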