I just learned #Rust by creating a bunch of CLI tools, and it was amazing! (#cli #rustlang #dev #devlog #programming)

Learning Rust by Rewriting My CLI Tools

Overview

Since everyone says Rust is fast and powerful, I've always wanted to learn it, but I lacked the motivation to do so… until I noticed I already had a bunch of CLI tools, all written in different languages and scattered across different repositories.

That's when I decided to learn Rust by rewriting all of my CLI tools in it, including clones of simple tools like cat and touch, which I like but which don't exist on Windows.

At the beginning, I was a bit overwhelmed by the Rust borrow checker, and now that some time has passed, it still feels like the beginning. 🤣 If you're used to languages like C# or Python, the borrow checker can be a bit of a hurdle, but after banging your head on the keyboard for a while, it starts to make sense.
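To give you an idea of what I mean, here is a tiny example (not from the toolbox, just an illustration) of the kind of code the borrow checker rejects until you restructure it:

```rust
fn main() {
    let mut names = vec![String::from("cat"), String::from("touch")];

    // An immutable borrow of `names`...
    let first = &names[0];

    // ...means this mutation would not compile while `first` is still in use:
    // "cannot borrow `names` as mutable because it is also borrowed as immutable".
    // names.push(String::from("ts"));

    println!("first tool: {first}");

    // Once the immutable borrow is no longer used, mutating is fine.
    names.push(String::from("ts"));
}
```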

When I started, I was using AI a lot more, and I noticed that the more I used it, the less I learned. So I decided to use it differently. Instead of asking AI to bail me out and fix the code, I started asking it to tell me what was wrong, why, and what the most idiomatic way to do it in Rust would be.

That's when I started to understand Rust properly, and the borrow checker started to make sense. I kept doing this until all the tools were done; then I started asking AI to analyze the tools, find bugs, and suggest improvements, but not to change the code for me.

This gave me a lot of insight, and I basically re-created everything again.

One thing I shamelessly used AI for was creating the summaries for the methods. Basically all the documentation was created by AI, and I'm not fully ashamed of it. 😂
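For reference, those summaries are just regular rustdoc /// comments; here's a made-up example of the format (not a snippet from the repo):

```rust
/// Returns `true` if `line` contains any of the search `terms`,
/// comparing case-insensitively.
///
/// (Hypothetical helper, shown only to illustrate the doc-comment style.)
fn matches_any(line: &str, terms: &[String]) -> bool {
    let lower = line.to_lowercase();
    terms.iter().any(|term| lower.contains(&term.to_lowercase()))
}

fn main() {
    let terms = vec!["error".to_string(), "warning".to_string()];
    assert!(matches_any("An ERROR occurred", &terms));
}
```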

Are all the tools perfect? No, but they are a lot better than they were before (on my first try), and I learned a lot in the process.

It's worth mentioning that your mileage with AI may vary. This was my experience, and it worked for me.

Was learning Rust hard? As someone who already knows how to program and is used to languages like C#, Python, Golang, and TypeScript, I would say… no. It wasn't hard, but it has a steep learning curve until you get used to the borrow checker. After that, it becomes a lot easier.

So, after all that, what do we have?

  1. A tool to read the public info in JWT tokens;
  2. A high-performance tool to read messages from EventHub;
  3. Another tool to export those messages;
  4. A CSV data normalizer tool;
  5. A tool that splits large files (including CSV) into smaller ones;
  6. A tool that searches for multiple terms inside a text file and creates one output file per search term;
  7. A tool that mimics the cat command from Unix (useful on Windows);
  8. A tool that mimics the touch command from Unix (also useful on Windows);
  9. A tool that generates GUIDs (UUIDv4) in the terminal, with some nice options;
  10. A tool that converts Unix timestamps to a readable format and vice versa.

cat - Mimics the classic Unix cat command. It concatenates files and displays them with optional line numbering, character visualization, and formatting features.

Example:

```bash
# Show a file with line numbers and visible tabs/line endings
cat -nA config.txt
```

Output:

```plaintext
     1	server_host=localhost^I# Main server$
     2	port=8080$
     3	$
     4	debug=true$
```
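If you're curious how the -n/-A behaviour can be put together, here's a rough, std-only sketch (not the actual implementation from the repo):

```rust
use std::fs;
use std::io;

// Sketch of `cat -nA`: number each line and make tabs and line endings
// visible (tab -> "^I", end of line -> "$"). Illustration only.
fn cat_na(path: &str) -> io::Result<()> {
    let contents = fs::read_to_string(path)?;
    for (number, line) in contents.lines().enumerate() {
        let visible = line.replace('\t', "^I");
        println!("{:6}\t{}$", number + 1, visible);
    }
    Ok(())
}

fn main() -> io::Result<()> {
    cat_na("config.txt")
}
```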

I hate dealing with CSV files with missing data and having to write a script (or search for one) to fix it, so I created this tool: a CSV data normalizer. This tool fills in empty fields with default values you specify, making your data clean and consistent.

Example:

```bash
# Fill missing names with "Unknown" and missing ages with "0"
csvn --file messy_data.csv --value-map "name=Unknown" --value-map "age=0"
```
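The core of the normalization step is simple enough to sketch in plain Rust (the column names and defaults below come from the example above; the real tool parses the CSV properly):

```rust
use std::collections::HashMap;

// Sketch: replace every empty field with the default configured for its
// column, leaving non-empty fields untouched. Illustration only.
fn fill_defaults(headers: &[String], record: &mut [String], defaults: &HashMap<String, String>) {
    for (i, field) in record.iter_mut().enumerate() {
        if field.trim().is_empty() {
            if let Some(default) = headers.get(i).and_then(|h| defaults.get(h)) {
                *field = default.clone();
            }
        }
    }
}

fn main() {
    let headers = vec!["name".to_string(), "age".to_string()];
    let defaults = HashMap::from([
        ("name".to_string(), "Unknown".to_string()),
        ("age".to_string(), "0".to_string()),
    ]);

    let mut row = vec!["".to_string(), "42".to_string()];
    fill_defaults(&headers, &mut row, &defaults);
    assert_eq!(row, vec!["Unknown".to_string(), "42".to_string()]);
}
```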

EventHub Reader - connects to Azure EventHub, reads messages, and stores them locally with checkpoint/resume support. This is your gateway to capturing streaming data for later analysis. It performs reasonably well; during my tests it was able to read about 380 messages/second (it could probably be faster on filesystems other than NTFS).

Example:

```bash
# Read from all partitions and export filtered messages to files
eh_read --connection-string "Endpoint=sb://..." --entity-path "events" --read-to-file --dump-filter "ERROR"
```
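I won't reproduce the EventHub client code here, but the checkpoint/resume idea boils down to "remember the last sequence number you processed per partition". Here's a std-only sketch of that bookkeeping (the file format and names are hypothetical; the real tool uses an embedded database):

```rust
use std::collections::HashMap;
use std::fs;
use std::io;

// Hypothetical checkpoint store: one "partition=sequence" line per partition.
fn save_checkpoints(path: &str, checkpoints: &HashMap<String, u64>) -> io::Result<()> {
    let body: String = checkpoints
        .iter()
        .map(|(partition, sequence)| format!("{partition}={sequence}\n"))
        .collect();
    fs::write(path, body)
}

fn load_checkpoints(path: &str) -> HashMap<String, u64> {
    fs::read_to_string(path)
        .unwrap_or_default()
        .lines()
        .filter_map(|line| {
            let (partition, sequence) = line.split_once('=')?;
            Some((partition.to_string(), sequence.parse().ok()?))
        })
        .collect()
}

fn main() -> io::Result<()> {
    let mut checkpoints = load_checkpoints("checkpoints.txt");
    // After processing a message from partition "0" with sequence number 1234:
    checkpoints.insert("0".to_string(), 1234);
    save_checkpoints("checkpoints.txt", &checkpoints)
}
```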

One of the features of the aforementioned EventHub Reader is the ability to read messages from EventHub and save them to a local embedded database. After that, you need a way to export those messages from the DB to files. Enter the EventHub Export tool! It exports messages from local databases (created by eh_read) to various file formats.

You don't strictly need this tool, since you can dump messages directly to files while reading, but with it you can read at max speed first and export the messages later.

Cool example:

```bash
# Export all temperature sensor messages to JSON with metadata
eh_export --config export-config.json --export-format json --dump-filter "temperature" --include-metadata
```
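The export step itself is mostly serialization. Here's a rough sketch of the JSON path using serde/serde_json (the Message struct and its fields are made up for the example; the real tool reads them from the local database):

```rust
use serde::Serialize;
use std::error::Error;
use std::fs::File;
use std::io::{BufWriter, Write};

// Hypothetical message shape, only for illustration.
#[derive(Serialize)]
struct Message {
    partition: String,
    sequence: u64,
    body: String,
}

// Write one JSON object per line (JSON Lines), which keeps exports streamable.
fn export_json(path: &str, messages: &[Message]) -> Result<(), Box<dyn Error>> {
    let mut out = BufWriter::new(File::create(path)?);
    for message in messages {
        writeln!(out, "{}", serde_json::to_string(message)?)?;
    }
    Ok(())
}

fn main() -> Result<(), Box<dyn Error>> {
    let messages = vec![Message {
        partition: "0".to_string(),
        sequence: 1,
        body: "temperature: 21.5".to_string(),
    }];
    export_json("export.jsonl", &messages)
}
```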

get_lines - A high-performance, case-insensitive text search utility that extracts lines containing specific patterns. It's like grep, but with some neat features like separate output files per search term and parallel processing.

Example:

```bash
# Search for errors and warnings, output to separate files with 4 workers
get_lines --file server.log --search "error,warning,critical" --output results --workers 4
```

Creates results/error.txt, results/warning.txt, and results/critical.txt files automatically.
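Here's a single-threaded sketch of the "one output file per term" idea (the real tool adds parallel workers and more options; this is just the concept):

```rust
use std::collections::HashMap;
use std::fs::{self, File};
use std::io::{self, BufWriter, Write};

// Sketch: scan a file once and route each matching line (case-insensitively)
// to a per-term output file. Illustration only.
fn split_by_terms(input: &str, terms: &[&str], out_dir: &str) -> io::Result<()> {
    fs::create_dir_all(out_dir)?;
    let mut writers: HashMap<&str, BufWriter<File>> = HashMap::new();
    for &term in terms {
        writers.insert(term, BufWriter::new(File::create(format!("{out_dir}/{term}.txt"))?));
    }

    for line in fs::read_to_string(input)?.lines() {
        let lower = line.to_lowercase();
        for &term in terms {
            if lower.contains(&term.to_lowercase()) {
                writeln!(writers.get_mut(term).unwrap(), "{line}")?;
            }
        }
    }
    Ok(())
}

fn main() -> io::Result<()> {
    split_by_terms("server.log", &["error", "warning", "critical"], "results")
}
```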

guid - A GUID generator with some extra features that I find useful, like continuous generation at intervals and clipboard integration (automatically copying the generated GUID to the clipboard).

Example:

```bash
# Generate a new GUID every 2 seconds (great for testing)
guid --continuous-generation 2.0
```

Output:

```plaintext
🚦 Press Ctrl+C to stop...
550e8400-e29b-41d4-a716-446655440000
```

(The GUID values will be printed over and over on the same line.)
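Under the hood, continuous generation is essentially a loop around Uuid::new_v4. A sketch with the uuid crate (the real tool rewrites the same line and handles the interval flag properly; this sketch just prints a new line each time):

```rust
use std::thread;
use std::time::Duration;
use uuid::Uuid;

// Sketch of `--continuous-generation 2.0`: print a fresh v4 GUID every
// `interval` seconds until the process is interrupted with Ctrl+C.
fn main() {
    let interval = Duration::from_secs_f64(2.0);
    println!("🚦 Press Ctrl+C to stop...");
    loop {
        println!("{}", Uuid::new_v4());
        thread::sleep(interval);
    }
}
```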

jwt - A JWT decoder that extracts and displays token contents without signature verification. Handy for debugging authentication issues and understanding what's in your tokens.

Example:

```bash
# Decode a JWT and copy the user ID to clipboard
jwt --copy-to-clipboard client_id "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..."
```

Instantly see what's inside those mysterious JWT tokens; the value of the client_id claim, if it exists, is copied to the clipboard.
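Decoding without verification is just base64 plus JSON. A sketch using the base64 and serde_json crates (not the repo's code; the token below is a dummy):

```rust
use base64::engine::general_purpose::URL_SAFE_NO_PAD;
use base64::Engine;
use std::error::Error;

// Sketch: split the token into header.payload.signature, base64url-decode
// the payload, and parse it as JSON. No signature verification is done.
fn decode_claims(token: &str) -> Result<serde_json::Value, Box<dyn Error>> {
    let payload = token.split('.').nth(1).ok_or("not a JWT")?;
    let bytes = URL_SAFE_NO_PAD.decode(payload)?;
    Ok(serde_json::from_slice(&bytes)?)
}

fn main() -> Result<(), Box<dyn Error>> {
    // Dummy token whose payload is {"client_id":"abc"}.
    let token = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJjbGllbnRfaWQiOiJhYmMifQ.sig";
    let claims = decode_claims(token)?;
    println!("client_id = {}", claims["client_id"]);
    Ok(())
}
```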

split - A file splitter that handles both regular text files and CSV files with header preservation.

Example:

```bash
# Split a large CSV into 1000-line chunks, keeping headers in each file
split --file huge_dataset.csv --csv-mode --lines-per-file 1000
```

Each output file gets the original headers: no more manual header management!
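Header preservation is the only fiddly part. Here's a std-only sketch of the CSV mode (output file names and chunk layout are just for illustration):

```rust
use std::fs::{self, File};
use std::io::{self, BufWriter, Write};

// Sketch of `--csv-mode --lines-per-file N`: every output chunk starts with
// a copy of the original header row. Illustration only.
fn split_csv(path: &str, lines_per_file: usize) -> io::Result<()> {
    let contents = fs::read_to_string(path)?;
    let mut lines = contents.lines();
    let header = lines.next().unwrap_or_default();

    let rows: Vec<&str> = lines.collect();
    for (part, chunk) in rows.chunks(lines_per_file).enumerate() {
        let mut out = BufWriter::new(File::create(format!("part_{:04}.csv", part + 1))?);
        writeln!(out, "{header}")?;
        for row in chunk {
            writeln!(out, "{row}")?;
        }
    }
    Ok(())
}

fn main() -> io::Result<()> {
    split_csv("huge_dataset.csv", 1000)
}
```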

touch - My implementation of the Unix touch command for updating file timestamps. It creates files if they don't exist and handles various timestamp formats.

Cool example:

```bash
# Set specific timestamp on multiple files
touch -d "2024-12-25 15:30:00" holiday_file1.txt holiday_file2.txt
```
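The behaviour is basically "create if missing, then bump the timestamps". A sketch using the filetime crate (parsing the -d date into a FileTime is left out):

```rust
use filetime::{set_file_times, FileTime};
use std::fs::OpenOptions;
use std::io;

// Sketch of the touch behaviour: create the file if it doesn't exist,
// then set both access and modification times to "now". Illustration only.
fn touch(path: &str) -> io::Result<()> {
    OpenOptions::new().create(true).append(true).open(path)?;
    let now = FileTime::now();
    set_file_times(path, now, now)
}

fn main() -> io::Result<()> {
    touch("holiday_file1.txt")
}
```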

ts - A simple bidirectional timestamp converter that automatically converts between Unix timestamps and human-readable dates. It detects what you give it and converts it to the other format. It's not a perfect port of the Unix date tool, but it works similarly. NGL, I created this because I usually want to know what the value of the _ts field (from Azure Cosmos DB) means. Now it's easy and fast.

Cool example:

```bash
# Convert Unix timestamp to readable date
ts 1703764800
```

Output:

```plaintext
🚀 Timestamp Converter v1.0.0
================================================
🔢 Input: 1703764800

UTC Time: 2023-12-28T12:00:00Z
Local Time: 2023-12-28T13:00:00+0100
```
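The "detect what you give it" logic is a simple branch. A sketch with chrono (the real tool's output format is fancier, as shown above):

```rust
use chrono::{DateTime, Utc};

// Sketch of the bidirectional conversion: if the input parses as an integer,
// treat it as a Unix timestamp; otherwise, parse it as an RFC 3339 date.
fn convert(input: &str) -> Option<String> {
    if let Ok(secs) = input.parse::<i64>() {
        let dt: DateTime<Utc> = DateTime::from_timestamp(secs, 0)?;
        Some(dt.to_rfc3339())
    } else {
        let dt = DateTime::parse_from_rfc3339(input).ok()?;
        Some(dt.timestamp().to_string())
    }
}

fn main() {
    println!("{:?}", convert("1703764800"));           // Some("2023-12-28T12:00:00+00:00")
    println!("{:?}", convert("2023-12-28T12:00:00Z")); // Some("1703764800")
}
```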

Here’s the link to the repository: https://github.com/brenordv/rusted-toolbox

If you’re not sure about learning Rust, I hope this post helps you to decide to do so. :)

Translations: