mirror of
https://github.com/torvalds/linux.git
synced 2026-04-18 06:44:00 -04:00
btrfs: introduce btrfs_bio_for_each_block() helper
Currently, if we want to iterate a bio in block-sized units, we do
something like this:
	while (iter->bi_size) {
		struct bio_vec bv = bio_iter_iovec(&bbio->bio, *iter);

		/* Do something with the bv */
		bio_advance_iter_single(&bbio->bio, iter, sectorsize);
	}
That's fine for now, but it will not handle the future bs > ps (block
size larger than page size) case, as bio_iter_iovec() returns a
single-page bvec, meaning bv_len can never exceed the page size.
This means code using that bv can only handle a full block if bs <= ps.
To address this problem and handle future bs > ps cases better:
- Introduce a helper btrfs_bio_for_each_block()
  Instead of a bio_vec, which comes in single-page and multi-page
  flavors (and the multi-page version has quite some limits), use my
  favorite way to represent a block: phys_addr_t.
  For bs <= ps cases, nothing changes except a very small overhead to
  convert the phys_addr_t to a folio, then use the proper folio
  helpers to handle possible highmem cases.
  For bs > ps cases, all blocks will be backed by large folios, meaning
  every folio covers at least one block, and the same folio helpers
  still handle the highmem cases.
  With phys_addr_t we handle both large folios and highmem properly,
  so there is no better single variable to represent a btrfs block
  than phys_addr_t.
- Extract the data block csum calculation into a helper
  The new helper, btrfs_calculate_block_csum(), will be used by
  btrfs_csum_one_bio().
- Use btrfs_bio_for_each_block() to replace existing call sites
Including:
* index_one_bio() from raid56.c
    Very straightforward.
* btrfs_check_read_bio()
Also update repair_one_sector() to grab the folio using phys_addr_t,
and do extra checks to make sure the folio covers at least one
block.
    We no longer need to bother with bv_len at all.
* btrfs_csum_one_bio()
    Now we can move the highmem handling into the dedicated helper
    btrfs_calculate_block_csum(), and use the btrfs_bio_for_each_block()
    helper.
There is one exception, btrfs_decompress_buf2page(), which copies
decompressed data into the original bio. It does not iterate in
block-size units, so we do not need to convert it.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
@@ -1208,17 +1208,16 @@ static void index_one_bio(struct btrfs_raid_bio *rbio, struct bio *bio)
 	const u32 sectorsize = rbio->bioc->fs_info->sectorsize;
 	const u32 sectorsize_bits = rbio->bioc->fs_info->sectorsize_bits;
 	struct bvec_iter iter = bio->bi_iter;
+	phys_addr_t paddr;
 	u32 offset = (bio->bi_iter.bi_sector << SECTOR_SHIFT) -
 		     rbio->bioc->full_stripe_logical;
 
-	while (iter.bi_size) {
+	btrfs_bio_for_each_block(paddr, bio, &iter, sectorsize) {
 		unsigned int index = (offset >> sectorsize_bits);
 		struct sector_ptr *sector = &rbio->bio_sectors[index];
-		struct bio_vec bv = bio_iter_iovec(bio, iter);
 
 		sector->has_paddr = true;
-		sector->paddr = bvec_phys(&bv);
-		bio_advance_iter_single(bio, &iter, sectorsize);
+		sector->paddr = paddr;
 		offset += sectorsize;
 	}
 }