FUSE micro-opt benchmarking #5110
A follow-up comment on the copy-avoidance change (the 3rd change described below): Seems like Python is clever enough to not copy if the slice would span the whole bytestring, and that is quite often the case in that code fragment.
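A minimal way to check that behaviour (it is a CPython implementation detail for immutable bytes objects, not a language guarantee):

```python
data = b"x" * 1024

# A full-range slice of a bytes object comes back as the very same object
# in CPython, i.e. no copy is made:
assert data[:] is data
assert data[0:len(data)] is data

# A partial slice really does allocate a new object and copy:
assert data[1:] is not data
```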


If somebody has some time for FUSE benchmarking:
The first 2 changes remove the selective behaviour of caching only partially read chunks and of removing fully read chunks from the cache. While that behaviour sounds obviously right when thinking about sequential reads, it may be counterproductive for repeating chunks (like all-zero chunks).
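For context, a simplified sketch of the kind of selective caching/eviction those two changes would drop (an illustration only, not the actual borg FUSE code; `data_cache` and `fetch_chunk` are assumed names):

```python
def read_from_chunk(self, chunk_id, offset, size):
    # Illustration only, not borg's real code path.
    data = self.data_cache.pop(chunk_id, None)
    if data is None:
        data = self.fetch_chunk(chunk_id)  # hypothetical helper
    part = data[offset:offset + size]
    if offset + size < len(data):
        # Chunk only partially consumed: keep it cached, a sequential
        # read() will likely continue right here.
        self.data_cache[chunk_id] = data
    # A fully consumed chunk stays evicted; that is exactly the behaviour
    # that may hurt when the same chunk (e.g. an all-zero chunk) is read
    # repeatedly.
    return part
```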
The 3rd change tries to avoid creating a copy of data just for the sake of slicing it. Not sure if this helps (it only happens for the first/last chunk within a read) or is counterproductive due to the additional line of code.
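And a sketch of what that copy avoidance could look like (an assumption about the change, not quoted from the actual diff):

```python
# Slicing only copies when a real sub-range is needed, which happens at the
# first/last chunk of a read; otherwise hand out the chunk as-is.
if offset > 0 or offset + size < len(data):
    part = data[offset:offset + size]
else:
    part = data  # whole chunk requested: no slicing, no copy
```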
If someone wants to benchmark these (and maybe also try a bigger self.data_cache), that would be helpful!
Try:
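The exact commands from the issue are not shown here; as one possible starting point, a rough sequential-read throughput measurement against a mounted archive (paths and block size are made up, mount first with e.g. `borg mount repo::archive /mnt/borg`):

```python
import time

path = "/mnt/borg/some/large/file"   # hypothetical path inside the mount
block = 1024 * 1024                  # 1 MiB reads

total = 0
start = time.monotonic()
with open(path, "rb") as f:
    while True:
        buf = f.read(block)
        if not buf:
            break
        total += len(buf)
elapsed = time.monotonic() - start
print(f"{total / 1e6:.1f} MB in {elapsed:.2f} s "
      f"({total / (1e6 * elapsed):.1f} MB/s)")
```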