{"id":400,"date":"2023-06-28T22:20:54","date_gmt":"2023-06-28T10:20:54","guid":{"rendered":"https:\/\/rayssite.ddns.net\/?p=400"},"modified":"2023-06-28T22:20:57","modified_gmt":"2023-06-28T10:20:57","slug":"troubleshooting-slow-read-speeds-from-ltfs-volumes-on-windows-10","status":"publish","type":"post","link":"https:\/\/rayssite.ddns.net\/index.php\/2023\/06\/28\/troubleshooting-slow-read-speeds-from-ltfs-volumes-on-windows-10\/","title":{"rendered":"Troubleshooting Slow Read Speeds from LTFS Volumes on Windows 10"},"content":{"rendered":"\n<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_81 ez-toc-wrap-left counter-hierarchy ez-toc-counter ez-toc-grey ez-toc-container-direction\">\n<div class=\"ez-toc-title-container\">\n<p class=\"ez-toc-title\" style=\"cursor:inherit\">Table of Contents<\/p>\n<span class=\"ez-toc-title-toggle\"><a href=\"#\" class=\"ez-toc-pull-right ez-toc-btn ez-toc-btn-xs ez-toc-btn-default ez-toc-toggle\" aria-label=\"Toggle Table of Content\"><span class=\"ez-toc-js-icon-con\"><span class=\"\"><span class=\"eztoc-hide\" style=\"display:none;\">Toggle<\/span><span class=\"ez-toc-icon-toggle-span\"><svg style=\"fill: #999;color:#999\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" class=\"list-377408\" width=\"20px\" height=\"20px\" viewBox=\"0 0 24 24\" fill=\"none\"><path d=\"M6 6H4v2h2V6zm14 0H8v2h12V6zM4 11h2v2H4v-2zm16 0H8v2h12v-2zM4 16h2v2H4v-2zm16 0H8v2h12v-2z\" fill=\"currentColor\"><\/path><\/svg><svg style=\"fill: #999;color:#999\" class=\"arrow-unsorted-368013\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"10px\" height=\"10px\" viewBox=\"0 0 24 24\" version=\"1.2\" baseProfile=\"tiny\"><path d=\"M18.2 9.3l-6.2-6.3-6.2 6.3c-.2.2-.3.4-.3.7s.1.5.3.7c.2.2.4.3.7.3h11c.3 0 .5-.1.7-.3.2-.2.3-.5.3-.7s-.1-.5-.3-.7zM5.8 14.7l6.2 6.3 6.2-6.3c.2-.2.3-.5.3-.7s-.1-.5-.3-.7c-.2-.2-.4-.3-.7-.3h-11c-.3 0-.5.1-.7.3-.2.2-.3.5-.3.7s.1.5.3.7z\"\/><\/svg><\/span><\/span><\/span><\/a><\/span><\/div>\n<nav><ul class='ez-toc-list ez-toc-list-level-1 ' 
><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-1\" href=\"https:\/\/rayssite.ddns.net\/index.php\/2023\/06\/28\/troubleshooting-slow-read-speeds-from-ltfs-volumes-on-windows-10\/#Background\" >Background<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-2\" href=\"https:\/\/rayssite.ddns.net\/index.php\/2023\/06\/28\/troubleshooting-slow-read-speeds-from-ltfs-volumes-on-windows-10\/#Troubleshooting_Attempts\" >Troubleshooting Attempts<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-3\" href=\"https:\/\/rayssite.ddns.net\/index.php\/2023\/06\/28\/troubleshooting-slow-read-speeds-from-ltfs-volumes-on-windows-10\/#The_Solution\" >The Solution<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-4\" href=\"https:\/\/rayssite.ddns.net\/index.php\/2023\/06\/28\/troubleshooting-slow-read-speeds-from-ltfs-volumes-on-windows-10\/#Conclusion\" >Conclusion<\/a><\/li><\/ul><\/nav><\/div>\n<h2 class=\"wp-block-heading\"><span class=\"ez-toc-section\" id=\"Background\"><\/span>Background<span class=\"ez-toc-section-end\"><\/span><\/h2>\n\n\n\n<p>A while ago, I got two LTO-6 tape drives for practically free (previous post <a href=\"https:\/\/rayssite.ddns.net\/index.php\/2022\/07\/02\/lto-tape-drive-repair\/\" data-type=\"post\" data-id=\"89\">here<\/a>). Since then, I&#8217;ve been using them for archival storage of video files. I chose LTFS for its open format and independence from backup software, and I&#8217;ve been using it on my primary PC running Windows 10. To make sure I won&#8217;t run into the &#8220;shoe shining&#8221; problem, where the tape drive has to stop and rewind repeatedly because small files can&#8217;t be fed to it fast enough to keep it streaming, I zip up my data into large files (generally >100GB) before putting them on tape. 
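<\/p>

<p>The zipping step above can be sketched in Python; a minimal, hypothetical example (the paths and the <code>stage_archive<\/code> helper are my own names for illustration, not part of any LTFS tooling):<\/p>

```python
import os
import shutil


def stage_archive(src_dir: str, staging_dir: str, name: str) -> str:
    """Bundle a directory into a single large archive so the tape
    drive later sees one sequential stream instead of many small files."""
    os.makedirs(staging_dir, exist_ok=True)
    # shutil.make_archive appends the .zip extension itself and
    # returns the full path of the archive it created
    return shutil.make_archive(os.path.join(staging_dir, name), "zip", src_dir)
```

<p>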
During the zipping process, I write the zipped files to a fast HDD or a temporary RAID 0 array, so the source can keep up with the tape drive&#8217;s write speed. With <code>robocopy<\/code>, I was able to write to LTO-6 tapes at up to 150MB\/s on average, which is pretty close to the LTO-6 specification of 160MB\/s. One day, I had to restore a large file and noticed the read speed was ridiculously slow (in the realm of 8-10MB\/s). What&#8217;s going on?<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><span class=\"ez-toc-section\" id=\"Troubleshooting_Attempts\"><\/span>Troubleshooting Attempts<span class=\"ez-toc-section-end\"><\/span><\/h2>\n\n\n\n<p>My troubleshooting began with a Google search for clues about the cause. There weren&#8217;t many results; Windows + LTFS doesn&#8217;t seem to be a popular combination. Here are some links I found that might be relevant:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>The OP in <a rel=\"noreferrer noopener\" href=\"https:\/\/www.reddit.com\/r\/homelab\/comments\/tqv9hu\/ridicolous_slow_file_retrieval_from_ltfs_lto5\/\" target=\"_blank\">this Reddit post<\/a> seems to have the same problem, although they ended up switching to proprietary backup software instead of LTFS.<\/li><li>The OP in <a rel=\"noreferrer noopener\" href=\"https:\/\/www.reddit.com\/r\/DataHoarder\/comments\/11ln998\/very_slow_speeds_on_lto5_tape_drive_on_win11\/\" target=\"_blank\">this Reddit post<\/a> might have the same problem; they ended up setting up a Windows 7 machine dedicated to tape backup.<\/li><li>In this Reddit post, someone mentioned the slow speeds with LTFS, but no solutions were mentioned at the time of writing.<\/li><li>This forum thread is pretty old (2012), and no real solution was mentioned at the time of writing.<\/li><li>The OP in this forum thread also ended up using LTFS in a Windows 7 environment.<\/li><li>And lastly, this link talked about slow read speeds, but the OP was 
using LTFS on Red Hat Linux.<\/li><\/ul>\n\n\n\n<p>At first, I thought this may have been caused by using an improper tool (e.g. drag-and-drop in Windows File Explorer) to write those files to tape, or by wrong settings when formatting the tape, resulting in file fragmentation. As tape is a linear medium with enormous seek times, fragmentation would cause slow read speeds from the constant seeking of data. However, as the tape was full and deleting files in LTFS doesn&#8217;t release the occupied space, I had to format the whole tape and write the files back using <code>robocopy<\/code> again. To my surprise, reading from the tape still resulted in slow speeds, so fragmentation wasn&#8217;t the actual problem.<\/p>\n\n\n\n<p>During this slow reading, the tape drive was mostly silent, with only the occasional spinning of the tape as data was copied. This made me think the cause might be software-related. Testing with both HPE Library &amp; Tape Tools and the Iperius backup software showed full read and write speeds, confirming there were no issues with my hardware setup.<\/p>\n\n\n\n<p>Next, I played around with various settings in the LTFS Configurator tool by HPE, and nothing helped. 
Then, I noticed the tool supports mounting the volume with verbose logging:<\/p>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"423\" height=\"570\" data-id=\"426\" data-src=\"https:\/\/rayssite.ddns.net\/wp-content\/uploads\/2023\/06\/LTFS-verbose-logging-settings_1.png\" alt=\"\" class=\"wp-image-426 lazyload\" data-srcset=\"https:\/\/rayssite.ddns.net\/wp-content\/uploads\/2023\/06\/LTFS-verbose-logging-settings_1.png 423w, https:\/\/rayssite.ddns.net\/wp-content\/uploads\/2023\/06\/LTFS-verbose-logging-settings_1-223x300.png 223w\" data-sizes=\"(max-width: 423px) 100vw, 423px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 423px; --smush-placeholder-aspect-ratio: 423\/570;\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"456\" height=\"645\" data-id=\"427\" data-src=\"https:\/\/rayssite.ddns.net\/wp-content\/uploads\/2023\/06\/LTFS-verbose-logging-settings_2.png\" alt=\"\" class=\"wp-image-427 lazyload\" data-srcset=\"https:\/\/rayssite.ddns.net\/wp-content\/uploads\/2023\/06\/LTFS-verbose-logging-settings_2.png 456w, https:\/\/rayssite.ddns.net\/wp-content\/uploads\/2023\/06\/LTFS-verbose-logging-settings_2-212x300.png 212w\" data-sizes=\"(max-width: 456px) 100vw, 456px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 456px; --smush-placeholder-aspect-ratio: 456\/645;\" \/><\/figure>\n<\/figure>\n\n\n\n<p>Enabling it gave me a detailed log, and here&#8217;s an excerpt of it when writing to the tape:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>727ca0 LTFS20010D SCSI request: &#91; 0A 00 08 00 00 00 ] 
Requested length=524288\r\nfgetattr&#91;50720816] \/MVI_0019-001.MP4\r\nwrite&#91;50720816] 524288 bytes to 295698432 flags: 0x2\r\n   write&#91;50720816] 524288 bytes to 295698432\r\nfgetattr&#91;50720816] \/MVI_0019-001.MP4\r\nwrite&#91;50720816] 524288 bytes to 296222720 flags: 0x2\r\n   write&#91;50720816] 524288 bytes to 296222720\r\nfgetattr&#91;50720816] \/MVI_0019-001.MP4\r\nwrite&#91;50720816] 524288 bytes to 296747008 flags: 0x2\r\n   write&#91;50720816] 524288 bytes to 296747008\r\nfgetattr&#91;50720816] \/MVI_0019-001.MP4\r\nwrite&#91;50720816] 524288 bytes to 297271296 flags: 0x2\r\n   write&#91;50720816] 524288 bytes to 297271296\r\n727ca0 LTFS20011D SCSI outcome: Driver status=0x00 SCSI status=0x00 Actual length=524288\r\n727ca0 LTFS20039D Backend write: 524288 bytes\r\n727ca0 LTFS20010D SCSI request: &#91; 0A 00 08 00 00 00 ] Requested length=524288\r\nfgetattr&#91;50720816] \/MVI_0019-001.MP4\r\nwrite&#91;50720816] 524288 bytes to 297795584 flags: 0x2\r\n   write&#91;50720816] 524288 bytes to 297795584\r\nfgetattr&#91;50720816] \/MVI_0019-001.MP4\r\nwrite&#91;50720816] 524288 bytes to 298319872 flags: 0x2\r\n   write&#91;50720816] 524288 bytes to 298319872\r\nfgetattr&#91;50720816] \/MVI_0019-001.MP4\r\nwrite&#91;50720816] 524288 bytes to 298844160 flags: 0x2\r\n   write&#91;50720816] 524288 bytes to 298844160\r\nfgetattr&#91;50720816] \/MVI_0019-001.MP4\r\nwrite&#91;50720816] 524288 bytes to 299368448 flags: 0x2\r\n727ca0 LTFS20011D SCSI outcome: Driver status=0x00 SCSI status=0x00 Actual length=524288\r\n   write&#91;50720816] 524288 bytes to 299368448\r\n727ca0 LTFS20039D Backend write: 524288 bytes\r\n727ca0 LTFS20010D SCSI request: &#91; 0A 00 08 00 00 00 ] Requested length=524288<\/code><\/pre>\n\n\n\n<p>And here&#8217;s a log excerpt during a read operation:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>305ec20 LTFS20011D SCSI outcome: Driver status=0x00 SCSI status=0x00 Actual length=524288\r\n   read&#91;50721040] 524288 bytes from 
780140544\r\ne71900 LTFS20057D Backend locate: (1, 1497)\r\ne71900 LTFS20010D SCSI request: &#91; 92 00 00 01 00 00 00 00 00 00 05 D9 00 00 00 00 ] Requested length=0\r\nread&#91;50721040] 524288 bytes from 780664832 flags: 0x2\r\ne71900 LTFS20011D SCSI outcome: Driver status=0x00 SCSI status=0x00 Actual length=0\r\ne71900 LTFS20010D SCSI request: &#91; 34 06 00 00 00 00 00 00 00 00 ] Requested length=32\r\ne71900 LTFS20011D SCSI outcome: Driver status=0x00 SCSI status=0x00 Actual length=32\r\ne71900 LTFS20060D Backend ReadPosition: Partition=1, LogObject=1497, FMcount=4\r\ne71900 LTFS20039D Backend read: 524288 bytes\r\ne71900 LTFS20010D SCSI request: &#91; 08 00 08 00 00 00 ] Requested length=524288\r\ne71900 LTFS20011D SCSI outcome: Driver status=0x00 SCSI status=0x00 Actual length=524288\r\n   read&#91;50721040] 524288 bytes from 781189120\r\ne70a60 LTFS20057D Backend locate: (1, 1499)\r\ne70a60 LTFS20010D SCSI request: &#91; 92 00 00 01 00 00 00 00 00 00 05 DB 00 00 00 00 ] Requested length=0\r\nread&#91;50721040] 524288 bytes from 781713408 flags: 0x2\r\ne70a60 LTFS20011D SCSI outcome: Driver status=0x00 SCSI status=0x00 Actual length=0\r\ne70a60 LTFS20010D SCSI request: &#91; 34 06 00 00 00 00 00 00 00 00 ] Requested length=32\r\ne70a60 LTFS20011D SCSI outcome: Driver status=0x00 SCSI status=0x00 Actual length=32\r\ne70a60 LTFS20060D Backend ReadPosition: Partition=1, LogObject=1499, FMcount=4\r\ne70a60 LTFS20039D Backend read: 524288 bytes\r\ne70a60 LTFS20010D SCSI request: &#91; 08 00 08 00 00 00 ] Requested length=524288\r\ne70a60 LTFS20011D SCSI outcome: Driver status=0x00 SCSI status=0x00 Actual length=524288\r\n   read&#91;50721040] 524288 bytes from 782237696\r\ne70c00 LTFS20057D Backend locate: (1, 1496)\r\ne70c00 LTFS20010D SCSI request: &#91; 92 00 00 01 00 00 00 00 00 00 05 D8 00 00 00 00 ] Requested length=0\r\nread&#91;50721040] 524288 bytes from 782761984 flags: 0x2\r\ne70c00 LTFS20011D SCSI outcome: Driver status=0x00 SCSI status=0x00 Actual 
length=0\r\ne70c00 LTFS20010D SCSI request: &#91; 34 06 00 00 00 00 00 00 00 00 ] Requested length=32\r\ne70c00 LTFS20011D SCSI outcome: Driver status=0x00 SCSI status=0x00 Actual length=32\r\ne70c00 LTFS20060D Backend ReadPosition: Partition=1, LogObject=1496, FMcount=4\r\ne70c00 LTFS20039D Backend read: 524288 bytes\r\ne70c00 LTFS20010D SCSI request: &#91; 08 00 08 00 00 00 ] Requested length=524288\r\ne70c00 LTFS20011D SCSI outcome: Driver status=0x00 SCSI status=0x00 Actual length=524288\r\n   read&#91;50721040] 524288 bytes from 780664832\r\ne70da0 LTFS20057D Backend locate: (1, 1498)\r\ne70da0 LTFS20010D SCSI request: &#91; 92 00 00 01 00 00 00 00 00 00 05 DA 00 00 00 00 ] Requested length=0\r\nread&#91;50721040] 524288 bytes from 783286272 flags: 0x2\r\ne70da0 LTFS20011D SCSI outcome: Driver status=0x00 SCSI status=0x00 Actual length=0\r\ne70da0 LTFS20010D SCSI request: &#91; 34 06 00 00 00 00 00 00 00 00 ] Requested length=32\r\ne70da0 LTFS20011D SCSI outcome: Driver status=0x00 SCSI status=0x00 Actual length=32\r\ne70da0 LTFS20060D Backend ReadPosition: Partition=1, LogObject=1498, FMcount=4\r\ne70da0 LTFS20039D Backend read: 524288 bytes\r\ne70da0 LTFS20010D SCSI request: &#91; 08 00 08 00 00 00 ] Requested length=524288\r\ne70da0 LTFS20011D SCSI outcome: Driver status=0x00 SCSI status=0x00 Actual length=524288\r\n   read&#91;50721040] 524288 bytes from 781713408\r\ne715c0 LTFS20057D Backend locate: (1, 1500)\r\ne715c0 LTFS20010D SCSI request: &#91; 92 00 00 01 00 00 00 00 00 00 05 DC 00 00 00 00 ] Requested length=0\r\nread&#91;50721040] 524288 bytes from 784334848 flags: 0x2\r\ne715c0 LTFS20011D SCSI outcome: Driver status=0x00 SCSI status=0x00 Actual length=0\r\ne715c0 LTFS20010D SCSI request: &#91; 34 06 00 00 00 00 00 00 00 00 ] Requested length=32\r\ne715c0 LTFS20011D SCSI outcome: Driver status=0x00 SCSI status=0x00 Actual length=32\r\ne715c0 LTFS20060D Backend ReadPosition: Partition=1, LogObject=1500, FMcount=4\r\ne715c0 LTFS20039D Backend read: 
524288 bytes\r\ne715c0 LTFS20010D SCSI request: &#91; 08 00 08 00 00 00 ] Requested length=524288\r\ne715c0 LTFS20011D SCSI outcome: Driver status=0x00 SCSI status=0x00 Actual length=524288<\/code><\/pre>\n\n\n\n<p>Just by looking at the logs, even without any knowledge of SCSI commands, I noticed many more small transfers when reading from the tape than when writing to it. When writing, the drive received 512KB blocks one after another, without interruption, whereas the read log shows a <code>Backend locate<\/code> (a repositioning of the tape) before almost every block. To get more information about what was happening, I asked ChatGPT (GPT-4 model) to interpret the reading logs:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"532\" height=\"782\" data-src=\"https:\/\/rayssite.ddns.net\/wp-content\/uploads\/2023\/06\/ChatGPT-LTFS-logs-interpretation.png\" alt=\"\" class=\"wp-image-425 lazyload\" data-srcset=\"https:\/\/rayssite.ddns.net\/wp-content\/uploads\/2023\/06\/ChatGPT-LTFS-logs-interpretation.png 532w, https:\/\/rayssite.ddns.net\/wp-content\/uploads\/2023\/06\/ChatGPT-LTFS-logs-interpretation-204x300.png 204w\" data-sizes=\"(max-width: 532px) 100vw, 532px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 532px; --smush-placeholder-aspect-ratio: 532\/782;\" \/><figcaption>Interpretation of the logs by ChatGPT. Made sense to me!<\/figcaption><\/figure>\n\n\n\n<p>This confirmed my guess that the software was trying to read from different locations rather than sequentially. 
Of course, LLMs like ChatGPT may produce false information that looks plausible, so I took another step to verify this myself: a simple Python script that reads a file, block by block, until the end of the file:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>file_in = \"src_file_on_tape\"\r\nfile_out = \"dest_file_on_hdd\"\r\n\r\nBLOCK_SIZE = 524288  # 512KB, the block size seen in the LTFS logs\r\n\r\nwith open(file_in, 'rb') as f_in:\r\n\twith open(file_out, 'wb') as f_out:\r\n\t\twhile True:\r\n\t\t\tbuf_bytes = f_in.read(BLOCK_SIZE)\r\n\t\t\tf_out.write(buf_bytes)\r\n\t\t\tif len(buf_bytes) &lt; BLOCK_SIZE:  # a short read means end of file\r\n\t\t\t\tbreak<\/code><\/pre>\n\n\n\n<p>With this, I could see a full 100MB+\/s transfer speed writing to the destination hard drive! This confirmed that the copying tool was causing the slow restoration of files from tape.<\/p>\n\n\n\n<p>I also tried the <code>ltfscopy<\/code> utility from HPE, which is supposed to read from the tape linearly. Strangely, it exhibited the same problem. Maybe it uses some Windows copy API whose behaviour changed in Windows 10?<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><span class=\"ez-toc-section\" id=\"The_Solution\"><\/span>The Solution<span class=\"ez-toc-section-end\"><\/span><\/h2>\n\n\n\n<p>Once I had figured out what was wrong, I looked for an alternative copy tool that fit my requirements. However, even with help from ChatGPT, I couldn&#8217;t find any options guaranteed to read data linearly. I then asked ChatGPT to iterate on my initial Python script with my feedback. 
After a few iterations, I arrived at this script that worked well enough:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>import os\r\nimport time\r\nimport glob\r\nimport argparse\r\n\r\n\r\nBLOCK_SIZE = 524288  # Size of blocks to read and write\r\nPRINT_THRESHOLD = 100 * 1024 * 1024  # Print every 100MB\r\n\r\n# Argument parser\r\nparser = argparse.ArgumentParser(description='Copy files matching a pattern from one directory to another.')\r\nparser.add_argument('source_dir', help='Source directory')\r\nparser.add_argument('dest_dir', help='Destination directory')\r\nparser.add_argument('pattern', help='Pattern to match files')\r\nargs = parser.parse_args()\r\n\r\n# Get list of files to copy\r\nfiles_to_copy = glob.glob(os.path.join(args.source_dir, args.pattern))\r\n\r\n\r\nfor file_in in files_to_copy:\r\n    file_out = os.path.join(args.dest_dir, os.path.basename(file_in))\r\n\r\n    # Get total file size\r\n    total_size = os.path.getsize(file_in)\r\n\r\n    with open(file_in, 'rb') as f_in:\r\n        with open(file_out, 'wb') as f_out:\r\n            bytes_copied = 0  # Keep track of bytes copied\r\n            start_time = time.time()  # Start time for speed calculations\r\n            print_counter = 0  # Counter to control when to print\r\n            last_print_time = start_time  # Time of last print for speed calculations\r\n\r\n            while True:\r\n                buf_bytes = f_in.read(BLOCK_SIZE)\r\n                f_out.write(buf_bytes)\r\n                bytes_copied += len(buf_bytes)\r\n                print_counter += len(buf_bytes)\r\n\r\n                if print_counter >= PRINT_THRESHOLD:\r\n                    # Calculate speed\r\n                    current_time = time.time()\r\n                    elapsed_time = current_time - last_print_time\r\n                    speed = PRINT_THRESHOLD \/ elapsed_time \/ (1024*1024)  # Speed in MB\/s\r\n\r\n                    # Calculate progress\r\n                    progress = bytes_copied \/ 
total_size * 100\r\n\r\n                    print(f\"File: {os.path.basename(file_in)}, Copied: {bytes_copied} bytes, Speed: {speed:.2f} MB\/s, Progress: {progress:.2f}%\\r\")\r\n\r\n                    # Reset print_counter and last_print_time\r\n                    print_counter = 0\r\n                    last_print_time = current_time\r\n\r\n                if len(buf_bytes) &lt; BLOCK_SIZE:\r\n                    break\r\n\r\n    print(f\"Copy {os.path.basename(file_in)} complete.\")\r\n    print(\"Average speed: {:.2f} MB\/s\".format(total_size \/ (time.time() - start_time) \/ (1024*1024)))\r\n<\/code><\/pre>\n\n\n\n<p>This script doesn&#8217;t require any extra libraries or modules. It simply copies files matching a pattern (specified as a command-line argument) sequentially from one location to another. For every 100MB of a file copied, it displays how many bytes have been copied, the transfer speed over the last 100MB, and the overall progress. This suits my use case, which involves only a few large (100GB+) zipped-up files.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><span class=\"ez-toc-section\" id=\"Conclusion\"><\/span>Conclusion<span class=\"ez-toc-section-end\"><\/span><\/h2>\n\n\n\n<p>Well, there you have it! I hope this can help someone else facing the same problem. One day, if I have the time and motivation, I might put this on GitHub with binary executables to make it more accessible, although perhaps the developers of the <code>ltfscopy<\/code> tool will address this problem at some point.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Background A while ago, I got two LTO-6 tape drives for practically free (previous post here). Since then, I&#8217;ve been using them for archival storage of video files. I chose LTFS for its open format and independence from backup software, and I&#8217;ve been using it on my primary PC running Windows 10. 
To make sure&hellip; <a class=\"more-link\" href=\"https:\/\/rayssite.ddns.net\/index.php\/2023\/06\/28\/troubleshooting-slow-read-speeds-from-ltfs-volumes-on-windows-10\/\">Continue reading <span class=\"screen-reader-text\">Troubleshooting Slow Read Speeds from LTFS Volumes on Windows 10<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[27],"tags":[31,8,28,16],"class_list":["post-400","post","type-post","status-publish","format-standard","hentry","category-troubleshooting","tag-ltfs","tag-lto-6","tag-python","tag-ultrium-6250","entry"],"_links":{"self":[{"href":"https:\/\/rayssite.ddns.net\/index.php\/wp-json\/wp\/v2\/posts\/400","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/rayssite.ddns.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/rayssite.ddns.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/rayssite.ddns.net\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/rayssite.ddns.net\/index.php\/wp-json\/wp\/v2\/comments?post=400"}],"version-history":[{"count":19,"href":"https:\/\/rayssite.ddns.net\/index.php\/wp-json\/wp\/v2\/posts\/400\/revisions"}],"predecessor-version":[{"id":429,"href":"https:\/\/rayssite.ddns.net\/index.php\/wp-json\/wp\/v2\/posts\/400\/revisions\/429"}],"wp:attachment":[{"href":"https:\/\/rayssite.ddns.net\/index.php\/wp-json\/wp\/v2\/media?parent=400"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/rayssite.ddns.net\/index.php\/wp-json\/wp\/v2\/categories?post=400"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/rayssite.ddns.net\/index.php\/wp-json\/wp\/v2\/tags?post=400"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}