ariya.io
https://ariya.io/
Recent content on ariya.io
Not Everything is an Agent
https://ariya.io/2025/03/not-everything-is-an-agent
Mon, 31 Mar 2025 22:47:17 -0800
<p>“Agent” is likely going to be the word that will cause existential dread to true LLM enthusiasts.</p>
<p>Everyone’s got a different idea of what it means. In our modern age of innovation theater, lots of organizations gleefully slap the “agentic” label on anything that vaguely resembles a regular program (and pocket tons of money). Even a simple HTTP call to an LLM-as-a-Service can be called an agent, if you try desperately hard enough.</p>
<p>The internet, as always, is flooded with “groundbreaking” tutorials on building these so-called agents. Often authored by the latest <em>hypefluencers</em>, they typically involve a few lines (probably generated by whatever coding assistant is currently trending on Hacker News) that compose LangChain and an Ollama instance, often being presented as the pinnacle of AI autonomy. Because why bother with actual innovation when you can just repeat the quasi-boilerplate code <em>ad nauseam</em>?</p>
<p>That’s why I liked it <strong>a lot</strong> when the Anthropic article, <a href="https://www.anthropic.com/engineering/building-effective-agents">Building effective agents</a>, came out, as it dares to suggest that simply bolting on retrieval or memory to an LLM does <em>not</em>, in fact, make an agent. And chaining or routing? That’s just glorified <em>control flow</em>, folks. Only when an LLM is tasked with truly complex, real-world work such as coding or using a computer does it begin to resemble the autonomous agent we’ve been promised.</p>
<p>So how do you identify a real agent? Don’t be fooled by the grand pronouncements of those rearranging deck chairs on the Titanic. Ask for the receipts of successful evaluations! Anecdotal evidence of a few successful LLM calls isn’t that useful. Remember, in the world of LLMs, as in life, the loudest claims are often the emptiest!</p>
Afterburner and Power Limit
https://ariya.io/2025/02/afterburner-and-power-limit
Fri, 28 Feb 2025 21:37:19 -0800
<p>Ever witnessed a fighter jet spewing hot flames as it kicks into afterburner? In that moment, efficiency is deliberately sacrificed for maximum acceleration.</p>
<p>In the midst of combat, efficiency means nothing when your life is on the line. The jet engine must keep roaring, lest the pilot get taken down by the enemy (and potentially meet their maker).</p>
<p>A GPU faces a similar fate. When pushed to consume hundreds of watts to churn out LLM tokens at the breakneck speed the user demands, there’s no choice but to run as fast as possible, even if sweat is pouring and muscle fatigue reaches its peak.</p>
<p>Fortunately, <code>nvidia-smi</code>, with its <code>-pl</code> (<em>power limit</em>) option, can be used to set an upper limit on power consumption, so the GPU doesn’t go completely overboard. Those last few dozen watts often don’t make a significant difference in performance, but they definitely contribute to heat generation, which needs to be monitored.</p>
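<p>For example, to inspect the supported range and then cap the board at 250 watts (the number is only an illustration; valid limits depend on the card, root privileges are required, and the setting typically has to be reapplied after a reboot):</p>
<pre><code>$ nvidia-smi -q -d POWER
$ sudo nvidia-smi -pl 250
</code></pre>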
<p><img src="https://ariya.io/images/2025/02/powerlimit.png" alt="Power Limit" /></p>
<p>From the graph (measured with <code>llama-bench</code>, for the Mistral-7B-Instruct model, Q4), it’s evident that pushing the power further doesn’t lead to increased LLM speed. 250, 300, or even 350 watts, it’s more or less the same. Meanwhile, dropping to 200 watts does slightly decrease the speed, but it’s very worthwhile considering the power consumption is reduced by a third.</p>
<p>Saving energy is always a wise choice!</p>
Privacy-Preserving Personal Search Appliance
https://ariya.io/2025/01/privacy-preserving-personal-search-appliance
Thu, 30 Jan 2025 20:27:09 -0700
<p>This little appliance is powered by SearXNG, an excellent open-source metasearch engine.</p>
<p>Unlike traditional search engines like Google or Bing, <a href="https://github.com/searxng/searxng">SearXNG</a> doesn’t crawl the web and index content. Instead, it leverages other search engines like DuckDuckGo, Qwant, and Mojeek to fetch results while protecting your privacy. This means your personal information isn’t tracked by those upstream services.</p>
<p>With the rise of LLMs and RAG, SearXNG has gained even more popularity. But I’ll dive into that in a future post.</p>
<p>Setting up SearXNG is a breeze. You can use Docker or <a href="https://podman.io">Podman</a> (my favorite Docker replacement, everyone should use it!) to get it running quickly. In fact, I encourage you to try it on your main machine. You’ll be surprised how easy it is!</p>
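<p>As a sketch, a single container is all it takes (the image name, port, and volume below reflect the upstream defaults at the time of writing; double-check the SearXNG documentation for the exact invocation):</p>
<pre><code>$ podman run -d --name searxng \
    -p 8080:8080 \
    -v searxng-config:/etc/searxng \
    docker.io/searxng/searxng
</code></pre>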
<p>A little fun fact about the name. SearXNG is an active fork of SearX. As is typically the case, NG stands for “next generation”. However, the X in SearX is actually the Greek letter <em>chi</em>, which is often transliterated as “ch”. So, you could think of SearXNG as “searching”.</p>
<p>While you can run SearXNG on your main machine, a dedicated device offers several advantages. You can share it with your family or colleagues and add extra security layers like Tailscale, Wireguard, or the good old OpenVPN.</p>
<p><img src="https://ariya.io/images/2025/01/searxng.jpg" alt="SearXNG Appliance" /></p>
<p>For my little box, I chose a used Shuttle DH170 with an Intel Core i3 6100 (2 cores, 4 threads) and 16GB of RAM. This might seem like overkill; it is certainly more than enough for SearXNG. The 200GB SSD is also plenty of storage. The total cost of the hardware was just $70. I could have saved more by using less RAM and storage, but I had these components on hand.</p>
<p>In terms of power consumption, the appliance idles at around 10W. I haven’t optimized it yet using tools like PowerTOP, but even so, it’s quite efficient. The off-the-shelf x86 architecture offers excellent upgrade potential. I could easily swap in a more powerful CPU like an Intel Core i7 6700 (4 cores, 8 threads) or add more RAM and storage if needed.</p>
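<p>When I do get around to tuning it, PowerTOP’s auto-tune mode is the usual first step, flipping most of the kernel’s power-saving knobs in one go (review its report first, since some toggles can affect USB or network devices):</p>
<pre><code>$ sudo powertop --auto-tune
</code></pre>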
<p>Initially, I considered using <a href="https://www.proxmox.com">Proxmox</a> to manage the system and run SearXNG in a container. However, I found this to be too complex. Instead, I opted for a simpler approach using vanilla Debian and Podman. Then, I remembered that there is a project, <a href="https://casaos.io">CasaOS</a>, a user-friendly home server OS. It offers a web-based interface for remote management and can easily run SearXNG. If you’re new to home servers, CasaOS is a great way to get started.</p>
<p>For those who prefer a more resource-constrained solution, you could use a Raspberry Pi or similar device. CasaOS also works well on ARM-based systems.</p>
<p>In today’s digital age, privacy is a fundamental right. Unfortunately, our digital footprints are being exploited, and our personal data is being harvested. Rampant privacy violations are becoming the norm. Let’s take proactive steps to protect ourselves and our loved ones!</p>
LLM Inference Machine for $300
https://ariya.io/2024/12/llm-inference-machine-for-300
Fri, 27 Dec 2024 20:17:14 -0800
<p>You can absolutely run <a href="https://qwenlm.github.io">Qwen-2.5 32B</a>. And of course, <a href="https://ai.meta.com/blog/meta-llama-3-1/">Llama-3.1 8B</a> and <a href="https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/">Llama-3.2 Vision 11B</a> are no problem at all.</p>
<p>Now, before you get too excited, there’s a catch: this rig won’t break any speed records (more on that later). But if you’re after a budget-friendly way to do LLM research, this build might be just what you need.</p>
<p>Here’s a breakdown of the parts and the amazing prices I got them for:</p>
<ul>
<li>AMD Ryzen 5 3400G: $50</li>
<li>Gigabyte X570 motherboard: $30</li>
<li>16 GB DDR4-3200 RAM: $30</li>
<li>512 GB SSD: $20</li>
<li>NVIDIA Tesla M40: $100</li>
<li>Cooler for M40: $30</li>
<li>EVGA 750W PSU: $20</li>
<li>Silverstone HTPC case: $20</li>
</ul>
<p>The motherboard was a crazy good find: a broken PCIe latch got me a killer deal. The Ryzen 3400G is outdated by today’s standards, with only 4 cores and 8 threads, but for a GPU-focused inference rig, it’s more than enough. Bonus: its Vega iGPU frees up the PCIe slot for the real star of the show, the <a href="https://www.techpowerup.com/gpu-specs/tesla-m40.c2771">M40 GPU</a>.</p>
<p>Speaking of the GPU, it’s a Maxwell-era data center card with a massive 24GB of VRAM. That much memory is essential for running hefty 32B models (quantized, of course).</p>
<p>While you can find a used M40 on eBay for around $90 these days, I had to buy an additional cooling solution (two small fans in a 3D-printed shroud), since data center GPUs usually don’t come with coolers or blowers like their consumer counterparts.</p>
<p>Here are the token generation speeds for several instruction-tuned models, quantized to 4-bit (Q4_K_M), measured with <a href="https://github.com/ggerganov/llama.cpp/tree/master/examples/llama-bench">llama-bench</a>:</p>
<ul>
<li>Phi-3.5 Mini: 47 tok/s</li>
<li>Mistral 7B: 30 tok/s</li>
<li>Llama-3.1 8B: 28 tok/s</li>
<li>Mistral Nemo 12B: 19 tok/s</li>
<li>Qwen-2.5 Coder 32B: 7 tok/s</li>
</ul>
<p><img src="https://ariya.io/images/2024/12/local-llm-machine.jpg" alt="Local LLM Machine" /></p>
<p>Performance is all relative. Compared to the latest RTX 3000 series, the M40 is definitely the slower sibling: about 5x slower, to be exact. But then again, an RTX 3090 is roughly 10x more expensive. Meanwhile, a more affordable RTX 3080 might limit your options with its 10GB (or 12GB for the enthusiast version) of VRAM.</p>
<p>An RTX 2080 Ti with 11GB VRAM could be a nice upgrade. Prices in the used market are dropping ($250 or less at the time of writing), and it delivers a solid 3x speed boost compared to the M40. Double the cost for triple the speed? That’s a pretty sweet deal!</p>
<p>How about Apple Silicon? The M2 Pro with its Metal GPU is roughly 25% faster than the M40. It wins easily in areas like portability, efficiency, and noise levels, but it comes with a significantly higher cost.</p>
<p>Coding assistance is a proven home-run use case for powerful LLMs. This is where the M40’s massive 24GB VRAM shines, enabling you to run the fantastic <a href="https://qwenlm.github.io">Qwen-2.5 Coder 32B model</a>. Pair it with <a href="https://www.continue.dev/">Continue.dev</a> as your coding assistant, and you’ve got a powerful combo that could replace tools like <a href="https://github.com/features/copilot">GitHub Copilot</a> or <a href="https://codeium.com">Codeium</a>, particularly for medium-complexity projects.</p>
<p>The best part? Privacy and data security. With local LLM inference, your precious source code stays on your machine.</p>
<p>Now, should I go all in? Is it time to add a second M40?</p>
Deploying an Uberjar to Dokku
https://ariya.io/2023/02/deploying-an-uberjar-to-dokku
Thu, 23 Feb 2023 11:37:36 -0800
<p><a href="https://dokku.com">Dokku</a> is a self-hosted Platform-as-a-Service (PaaS) that offers a compelling alternative to popular PaaS solutions like Heroku. With built-in support for Linux containers, deploying an application on Dokku is straightforward. However, there is a lesser-known deployment method that involves sending a build artifact, such as a JAR package for Java apps, directly to Dokku.</p>
<p><a href="https://dokku.com/docs/development/plugin-triggers/#git-from-archive">This deployment method</a> is useful when there is a need to quickly and frequently deploy the latest version of a custom application. By skipping the process of creating a container image, developers can focus on building the artifact for local development. This approach can be applied to packaged applications built with various programming languages, including Python, Java, JavaScript, PHP, etc.</p>
<p>To follow this process, it is necessary to have the packaged Java application in the form of <a href="https://stackoverflow.com/q/11947037">an Uberjar</a>, i.e. a JAR archive that contains all dependencies and can be executed by the JVM without requiring additional packages at runtime. The process assumes that Dokku has been installed on a machine named <code>dokku.homelab.lan</code>, and the <code>dokku</code> command is working properly:</p>
<pre><code>$ dokku version
dokku version 0.29.4
</code></pre>
<p>Also, due to some heavy building that is going to happen on that Dokku machine, make sure there is ample free capacity on the disk, ideally 8 GB or more (depending on the application).</p>
<p>If we are deploying an Uberjar, obviously that Uberjar needs to exist first. For this example, I am using an Uberjar from the open-source edition of <a href="https://metabase.com">Metabase</a> (adjust things to suit your needs). Note that Metabase is written in <a href="https://clojure.org">Clojure</a>, not Java, though it runs on the JVM. In theory, any other JVM language (e.g. Kotlin, Scala, etc) can work as well.</p>
<pre><code>$ curl -OL https://downloads.metabase.com/v0.45.2/metabase.jar
$ file ./metabase.jar
./metabase.jar: Zip archive data, at least v1.0 to extract, compression method=store
</code></pre>
<p>Two auxiliary files, a <code>Procfile</code> and a <code>Dockerfile</code>, are required. The <code>Procfile</code> contains a single line of code that specifies how the application is executed, while the <code>Dockerfile</code> details the construction of the container.</p>
<p>The first file, <code>Procfile</code>, is this one-liner:</p>
<pre><code>web: java -jar metabase.jar
</code></pre>
<p>The second file, <code>Dockerfile</code>, is not too strange to those who are familiar with Docker:</p>
<pre><code>FROM eclipse-temurin:17
WORKDIR /app
COPY . ./
RUN java -version
RUN ls -l /app
EXPOSE 3000
</code></pre>
<p>The base image is set to the latest Long-Term Support (LTS) version of OpenJDK v17 using the Eclipse Temurin distribution from <a href="https://adoptium.net">Adoptium</a>. The <code>EXPOSE</code> line indicates the port Metabase uses, which is port 3000. The two optional <code>RUN</code> lines are useful for debugging or resolving any issues that may arise.</p>
<p>Next, we package the necessary files into a tarball by executing the following commands:</p>
<pre><code>$ tar cvf package.tar Procfile Dockerfile metabase.jar
$ file ./package.tar
./package.tar: POSIX tar archive (GNU)
</code></pre>
<p>Before sending the tarball to Dokku, we must create an application. This is achieved by executing the following commands:</p>
<pre><code>$ dokku apps:create metabase
-----> Creating metabase...
-----> Creating new app virtual host file...
$ dokku proxy:ports-set metabase http:80:3000
dokku proxy:ports-set metabase http:80:3000
-----> Setting config vars
DOKKU_PROXY_PORT_MAP: http:80:3000
</code></pre>
<p>The last command maps the host’s port 80 to the container’s exposed port 3000. And now, the fun starts!</p>
<pre><code>$ cat package.tar | dokku git:from-archive metabase --
-----> Fetching tar file from stdin
-----> Generating build context
Striping 0 worth of directories from tarball
Moving unarchived files and folders into place
-----> Updating git repository with specified build context
-----> Cleaning up...
-----> Building metabase from Dockerfile
-----> Setting config vars
DOKKU_DOCKERFILE_PORTS: 3000
Sending build context to Docker daemon 271.7MB
Step 1/12 : FROM eclipse-temurin:17
Digest: sha256:f6562feb32844d0059616d6e54c6cc3127ccf77fb594ccb98cc4279ca15887ed
Status: Downloaded newer image for eclipse-temurin:17
---> 1e117025f42d
Step 2/12 : WORKDIR /app
---> Running in 89d26eed69f3
---> db157924a857
Step 3/12 : COPY . ./
---> 59e836261c66
Step 4/12 : RUN java -version
---> Running in 21df4266e534
openjdk version "17.0.6" 2023-01-17
OpenJDK Runtime Environment Temurin-17.0.6+10 (build 17.0.6+10)
OpenJDK 64-Bit Server VM Temurin-17.0.6+10 (build 17.0.6+10, mixed mode, sharing)
---> 16d451db8f1a
Step 5/12 : RUN ls -l /app
---> Running in 829a2df4f10e
total 265328
-rw-r--r-- 1 root root 92 Jan 31 02:57 Dockerfile
-rw-r--r-- 1 root root 271686194 Jan 31 02:57 metabase.jar
-rw-r--r-- 1 root root 28 Jan 31 02:57 Procfile
---> 67b7b1179da7
Step 6/12 : EXPOSE 3000
Step 7/12 : LABEL com.dokku.app-name=metabase
Step 8/12 : LABEL com.dokku.builder-type=dockerfile
Step 9/12 : LABEL com.dokku.image-stage=build
Step 10/12 : LABEL dokku=
Step 11/12 : LABEL org.label-schema.schema-version=1.0
Step 12/12 : LABEL org.label-schema.vendor=dokku
Successfully built a246db231b6f
Successfully tagged dokku/metabase:latest
-----> Releasing metabase...
-----> Checking for predeploy task
No predeploy task found, skipping
-----> Checking for release task
No release task found, skipping
-----> Checking for first deploy postdeploy task
No first deploy postdeploy task found, skipping
-----> Deploying metabase via the docker-local scheduler...
-----> Configuring metabase.dokku.homelab.lan...(using built-in template)
-----> Creating http nginx.conf
Reloading nginx
=====> Application deployed:
http://metabase.dokku.homelab.lan
</code></pre>
<p>The log may appear lengthy, but the steps it displays should be straightforward. On the Dokku target machine, a container image is constructed using the information in the tarball. From that image, a container is created and deployed using the standard Dokku machinery. If all goes well, the application (Metabase in this instance) will be up and running at the specified hostname.</p>
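<p>A quick way to confirm is to hit that hostname from any machine on the network (Metabase, being a JVM application, may need a minute or two before it starts responding):</p>
<pre><code>$ curl -sI http://metabase.dokku.homelab.lan
</code></pre>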
<p>As you become more familiar with this method, sending build artifacts to Dokku after each change will become a natural part of your workflow!</p>
Continuous Integration for React Native Apps with GitHub Actions
https://ariya.io/2020/12/continuous-integration-for-react-native-apps-with-github-actions
Tue, 29 Dec 2020 18:03:18 -0800
<p>For React Native mobile apps targeting Android and iOS, an easy way to set up continuous integration is to take advantage of Actions, an automation workflow service provided by GitHub. Even better, for open-source projects, GitHub Actions offers unlimited free running minutes (at the time of this writing).</p>
<p>The advantage of <a href="https://reactnative.dev/">React Native</a> is a single code base targeting two major mobile platforms, iOS and Android. However, care must be taken so that when one developer focuses on implementing features or fixing defects on Android, whatever they check into the code will not break iOS, and vice versa. Ideally, that developer should always check and verify both platforms. But mistakes happen, and the best way to catch them is to ensure that the corresponding continuous integration (CI) is running smoothly to catch those potential problems early on.</p>
<p>Thanks to <a href="https://docs.github.com/en/free-pro-team@latest/actions">GitHub Actions</a> supporting <a href="https://docs.github.com/en/free-pro-team@latest/actions/reference/context-and-expression-syntax-for-github-actions">workflow</a> runs on macOS and Linux (and actually Windows as well, but that is not too relevant for this purpose), creating a CI for React Native is easy enough. To follow along, check the sample project (in the style of Hello world) that I have created at <a href="https://github.com/ariya/hello-react-native">github.com/ariya/hello-react-native</a>.</p>
<p>Let us start with the Android build since it is the easiest. Create a file with the name <code>android.yml</code> under the directory <code>.github/workflows</code>. The content should be like this:</p>
<pre><code>name: Android
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js v12
        uses: actions/setup-node@v1
        with:
          node-version: 12.x
      - run: npm ci
      - run: ./gradlew assembleDebug -Dorg.gradle.logging.level=info
        working-directory: android
        name: Build Android apk (debug)
</code></pre>
<p>The above YAML declares that this workflow must be executed for every pull request and also once it is merged, as well as when a commit is pushed into the source repo. The workflow runs on an Ubuntu 20.04 machine which is, thanks to GitHub, <a href="https://docs.github.com/en/free-pro-team@latest/actions/reference/specifications-for-github-hosted-runners">already equipped</a> with some development packages, including Java, the Android SDK, and many other bits and pieces necessary for Android development. The first step is to check out the code (obvious), followed by another step to pick the <a href="https://nodejs.org/">Node.js</a> version (12 in this case, feel free to adjust it to your project). The step <code>npm ci</code> will install all the dependencies. The next step after that is invoking <a href="https://gradle.org/">Gradle</a> to build the app, just as it is done on a local development machine.</p>
<p>Once this file is ready, commit it to the repo, push the branch, and voila! GitHub will start to execute that build process for any future branch push and also for all pull requests (for this simple demo project, the build process takes about 3 minutes or less, not bad at all!). If the pull request does not break the Android build, we will see the usual green checkmark, as illustrated below. Of course, if the build breaks, the failure will be displayed and we can track the build log to find out what has gone wrong (this helps to accelerate the troubleshooting).</p>
<p><img src="https://ariya.io/images/2020/12/rn-pr.png" alt="Pull request" /></p>
<p>For completeness, we can also have the <a href="https://docs.github.com/en/free-pro-team@latest/actions/guides/storing-workflow-data-as-artifacts">build artifact</a>, the APK file generated by Gradle, archived for every workflow run. To do that, add the following lines:</p>
<pre><code>      - uses: actions/upload-artifact@v2
        with:
          name: android-apk
          path: '**/*.apk'
</code></pre>
<p>Clicking on the green mark icon on the commit view will lead to the detailed result of Action workflows for that particular commit. We can also find the link to the archived artifact, in this case the APK files. Since the artifacts are <a href="https://docs.github.com/en/free-pro-team@latest/github/administering-a-repository/configuring-the-retention-period-for-github-actions-artifacts-and-logs-in-your-repository">retained for some time</a> (based on the project settings, defaulting to 30 days), this can be very handy when we want to troubleshoot a problem. Let us say a certain feature does not work anymore with today’s build, but we are confident that the same feature still worked with the build from last week. Rather than checking out different revisions and rebuilding the app, we can just grab the APK files. Since these are built in debug mode, we can comfortably launch them in an emulator and debug them just like an APK built on a local development environment.</p>
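<p>If a different retention period is preferred for this particular workflow, the upload step also accepts a <code>retention-days</code> input (the value below is arbitrary):</p>
<pre><code>      - uses: actions/upload-artifact@v2
        with:
          name: android-apk
          path: '**/*.apk'
          retention-days: 14
</code></pre>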
<p><img src="https://ariya.io/images/2020/12/rn-artifact.png" alt="Artifact" /></p>
<p>How about building for iOS? It is not exactly the same, but it follows the same principles. Here is a minimalistic workflow file, <code>ios.yml</code>, as a starting point:</p>
<pre><code>name: iOS
on: [push, pull_request]
jobs:
  build:
    runs-on: macos-latest
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js
        uses: actions/setup-node@v1
        with:
          node-version: 14.x
      - run: npm ci
      - run: xcode-select -p
      - run: pod install
        working-directory: ios
        name: Install pod dependencies
      - name: Build iOS (debug)
        run: "xcodebuild \
          -workspace ios/HelloReactNative.xcworkspace \
          -scheme HelloReactNative \
          clean archive \
          -sdk iphoneos \
          -configuration Debug \
          -UseModernBuildSystem=NO \
          -archivePath $PWD/HelloReactNative \
          CODE_SIGNING_ALLOWED=NO"
</code></pre>
<p>The first few lines are just like the Android workflow. The major difference here is that the workflow needs to run on a macOS machine, as the iOS SDK isn’t available on either Linux or Windows. The build steps follow a similar pattern: check out the code, set up Node.js, and install dependencies. There are two extra steps. The first one is to run <code>xcode-select -p</code> to ensure the readiness of the correct Xcode and its related tools. The second one, <code>pod install</code>, is used to install any dependencies for <a href="https://cocoapods.org/">CocoaPods</a>, assuming that the project is using CocoaPods to manage iOS-specific dependencies (usually it is). After that, we invoke the command-line debug build with Xcode, just like what we would do on a local machine. Since building for iOS is a bit more complicated, for this simple demo project, it will run for around 10 minutes, give or take.</p>
<p>Note that the above YAML files cover the build for iOS and Android. Please do not forget to create another workflow file that runs the tests, typically with Jest, to catch potential regressions in the unit tests and/or integration tests. Oftentimes, this workflow is also the best place to run various static and dynamic code analyzers (linters, code formatters, security scanners, etc).</p>
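<p>Such a workflow can be as small as the sketch below (the file name, the Node.js version, and the assumption that <code>npm test</code> invokes Jest are all up to your project):</p>
<pre><code>name: Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js v12
        uses: actions/setup-node@v1
        with:
          node-version: 12.x
      - run: npm ci
      - run: npm test
</code></pre>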
<p>Armed with three workflow YAML files, we have established a simple yet powerful continuous integration setup for React Native apps. Happy developing!</p>
On GitHub Actions with MSYS2
https://ariya.io/2020/07/on-github-actions-with-msys2
Fri, 31 Jul 2020 20:33:31 -0700
<p>Thanks to the ready-made GitHub Action for MSYS2, it is easier than ever to construct a continuous integration setup for building with compilers and toolchains that run on MSYS2.</p>
<p>The details are available on the official page, <a href="https://github.com/marketplace/actions/setup-msys2">github.com/marketplace/actions/setup-msys2</a>. However, perhaps it is best illustrated with a simple but concrete example. As usual, for this illustration, you will see the use of this simplistic Hello, world program in ANSI C. To follow along, check out its repository at <a href="https://github.com/ariya/hello-c90">github.com/ariya/hello-c90</a>.</p>
<p>Let us take a look at this workflow setup to build this C program with GCC on <a href="https://www.msys2.org/">MSYS2</a> (on Windows, obviously):</p>
<pre><code>name: amd64_windows_gcc
on: [push, pull_request]
jobs:
  amd64_windows_gcc:
    runs-on: windows-2019
    defaults:
      run:
        shell: msys2 {0}
    steps:
      - uses: actions/checkout@v2
      - uses: msys2/setup-msys2@v2
        with:
          install: gcc make
      - run: gcc -v
      - run: make CC=gcc
      - run: file ./hello.exe
      - run: ./hello
</code></pre>
<p>The important lines are for the <code>setup-msys2</code> section. The <code>install</code> value allows an easy selection of various <a href="https://packages.msys2.org/search">packages</a> which shall be installed before proceeding to the next step. For this purpose, it is sufficient to install <code>gcc</code> and <code>make</code>, but YMMV.</p>
<p>The rest is self-explanatory. Please also note the <code>defaults</code> section earlier; it conveniently sets the default shell so that we do not need to specify it explicitly for every single <code>run</code> step thereafter.</p>
<p><img src="https://ariya.io/images/2020/07/msys2.png" width="642" height="310"></p>
<p>Now let us come up with another variant, this time for <a href="https://clang.llvm.org/">Clang</a> instead of GCC (read also my previous post: <a href="https://ariya.io/2020/01/clang-on-windows/">Clang for Windows</a>).</p>
<pre><code>name: amd64_windows_clang
on: [push, pull_request]
jobs:
  amd64_windows_clang:
    runs-on: windows-2019
    defaults:
      run:
        shell: msys2 {0}
    steps:
      - uses: actions/checkout@v2
      - uses: msys2/setup-msys2@v2
        with:
          install: make mingw-w64-x86_64-clang
      - run: clang --version
      - run: make CC=clang
      - run: file ./hello.exe
      - run: ./hello
</code></pre>
<p>Pretty straightforward, isn’t it? We just change the package to be installed and the compiler to be used. Since the two YAML files are very similar, to avoid a lot of repeated steps, we can parametrize it as follows. This is basically taking advantage of the <a href="https://docs.github.com/en/actions/configuring-and-managing-workflows/configuring-a-workflow#configuring-a-build-matrix">matrix strategy feature</a> of GitHub Actions.</p>
<pre><code>  amd64_windows:
    runs-on: windows-2019
    strategy:
      matrix:
        compiler: [gcc, clang]
    defaults:
      run:
        shell: msys2 {0}
    steps:
      - uses: actions/checkout@v2
      - uses: msys2/setup-msys2@v2
      - run: pacman --noconfirm -S make gcc
        if: ${{ matrix.compiler == 'gcc' }}
      - run: pacman --noconfirm -S make mingw-w64-x86_64-clang
        if: ${{ matrix.compiler == 'clang' }}
      - run: ${{ matrix.compiler }} --version
      - run: make CC=${{ matrix.compiler }}
      - run: file ./hello.exe
      - run: ./hello
</code></pre>
<p>To take it one step further, we can also support both i686 and AMD64 platforms in the same YAML file, again by parametrizing the architecture. Here is how it looks:</p>
<pre><code>name: windows
on: [push, pull_request]
jobs:
  windows:
    runs-on: windows-2019
    strategy:
      matrix:
        compiler: [gcc, clang]
        msystem: [MINGW32, MINGW64]
    defaults:
      run:
        shell: msys2 {0}
    steps:
      - uses: actions/checkout@v2
      - uses: msys2/setup-msys2@v2
        with:
          msystem: ${{ matrix.msystem }}
          install: make
      - run: pacman --noconfirm -S gcc
        if: ${{ matrix.compiler == 'gcc' }}
      - run: pacman --noconfirm -S mingw-w64-x86_64-clang
        if: ${{ (matrix.msystem == 'MINGW64') && (matrix.compiler == 'clang') }}
      - run: pacman --noconfirm -S mingw-w64-i686-clang
        if: ${{ (matrix.msystem == 'MINGW32') && (matrix.compiler == 'clang') }}
      - run: ${{ matrix.compiler }} --version
      - run: make CC=${{ matrix.compiler }}
      - run: file ./hello.exe
      - run: ./hello
</code></pre>
<p>That’s all four combinations, 32-bit and 64-bit, each with GCC and Clang, in a simple configuration!</p>
Cross-compiling with musl Toolchains
https://ariya.io/2020/06/cross-compiling-with-musl-toolchains
Mon, 22 Jun 2020 05:37:59 -0700
<p>When working on command-line utilities which can be useful for various platforms, from Windows on x86 to Linux on MIPS, the availability of cross-compilation is highly attractive. A number of different binaries can be constructed conveniently from a single, typically powerful host system.</p>
<p><a href="https://alpinelinux.org">Alpine Linux</a> popularizes the use of <a href="https://musl.libc.org">musl</a> a no-frills C standard library for Linux. According to its website:</p>
<blockquote>
<p>musl is lightweight, fast, simple, free, and strives to be correct in the sense of standards-conformance and safety.</p>
</blockquote>
<p>In addition, thanks to <a href="https://zv.io">Zach van Rijn</a>, we have a collection of static toolchains based on musl at <a href="https://musl.cc">musl.cc</a> at our disposal. The number of supported systems is rather mind-blowing; you get everything from the usual i686 to MIPS to Microblaze and many others.</p>
<p>As I searched for a viable alternative to the cross-compilation method based on Dockcross (see my previous blog post: <a href="https://ariya.io/2019/06/cross-compiling-with-docker-on-wsl-2/">Cross Compiling with Docker on WSL 2</a>), musl.cc fit the requirements nicely. I am in the process of migrating the <a href="https://github.com/ariya/fastlz/actions">continuous integration</a> of FastLZ, my implementation of a byte-aligned LZ77 compression algorithm, to be completely based on musl.cc.</p>
<p>Here is a quick walkthrough. As long as you are on Linux x86-64, you can follow along easily (and yes, this also works great on <a href="https://docs.microsoft.com/en-us/windows/wsl">WSL</a>, the Windows Subsystem for Linux). As a reference, we will use the simplest ANSI C/C90 program available at <a href="https://github.com/ariya/hello-c90">github.com/ariya/hello-c90</a>.</p>
<p><img src="https://ariya.io/images/2020/06/crosscompiler.png" width="584" height="357" alt="Cross compilation with musl Toolchains"/></p>
<p>First and foremost, we need <a href="https://qemu.org">QEMU</a> so we can test binaries that are not native to x86-64. For convenience, GNU Make is also necessary.</p>
<pre><code>$ sudo apt install -y qemu-user make
</code></pre>
<p>After that, let us grab the Hello C90 program:</p>
<pre><code>$ git clone https://github.com/ariya/hello-c90.git
$ cd hello-c90
</code></pre>
<p>For a start, let us try to produce a MIPS64 binary of our little Hello C90 program. Thus, we ought to grab the toolchain first, weighing in at about 90 MB.</p>
<pre><code>$ curl -O https://musl.cc/mips64-linux-musl-cross.tgz
$ tar xzf mips64-linux-musl-cross.tgz
</code></pre>
<p>To ensure that this fresh cross-compiler works, do a quick sanity check:</p>
<pre><code>$ ./mips64-linux-musl-cross/bin/mips64-linux-musl-gcc --version
mips64-linux-musl-gcc (GCC) 9.3.0
Copyright (C) 2019 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
</code></pre>
<p>This looks good! Now we can compile our Hello C90 program statically:</p>
<pre><code>$ make CC=./mips64-linux-musl-cross/bin/mips64-linux-musl-gcc LDFLAGS=-static
./mips64-linux-musl-cross/bin/mips64-linux-musl-gcc -O -Wall -std=c90 -c hello.c
./mips64-linux-musl-cross/bin/mips64-linux-musl-gcc -static -o hello hello.o
</code></pre>
<p>Checking the resulting binary should give the following:</p>
<pre><code>$ file ./hello
./hello: ELF 64-bit MSB executable, MIPS, MIPS-III version 1 (SYSV), statically linked, not stripped
</code></pre>
<p>It is exactly what we want! To run the executable:</p>
<pre><code>$ qemu-mips64 ./hello
Hello, world! From C90 with love...
</code></pre>
<p>Now, if you are doing this on WSL, or generally have a Windows machine available elsewhere, there is this fun activity of cross-compiling the above app for Windows, without the need for any Windows compiler and SDK. Same steps as before:</p>
<pre><code>$ curl -O https://musl.cc/x86_64-w64-mingw32-cross.tgz
$ tar xzf x86_64-w64-mingw32-cross.tgz
$ make CC=./x86_64-w64-mingw32-cross/bin/x86_64-w64-mingw32-gcc LDFLAGS=-static
$ file ./hello.exe
./hello.exe: PE32+ executable (console) x86-64, for MS Windows
</code></pre>
<p>To really test it, just bring <code>hello.exe</code> to Windows and it is going to run as expected.</p>
<p>For more details and elaborated examples, check the collection of <a href="https://github.com/ariya/hello-c90/tree/master/.github/workflows">workflows YAML files</a> of this Hello C program.</p>
<p>Combined with the continuous integration system of your choice, whether it is DIY via Jenkins or using one of the many services out there (GitHub Actions, Azure Pipelines, Travis CI), creating binaries for various operating systems becomes easier than ever!</p>
Nix Package Manager on Ubuntu or Debian
https://ariya.io/2020/05/nix-package-manager-on-ubuntu-or-debian
Sat, 30 May 2020 20:11:45 -0700
<p>Even though Ubuntu/Debian is equipped with its legendary, powerful package manager, <em>dpkg</em>, in some cases it is still beneficial to take advantage of <a href="https://nixos.org/nix">Nix</a>, a purely functional package manager.</p>
<p>The <a href="https://nixos.org/nix/manual">complete manual</a> of Nix does a fantastic job on explaining how to install and use it. But for the impatients among you, here is a quick overview. Note that this also works well when using Ubuntu/Debian under WSL (<a href="https://ubuntu.com/wsl">Windows Subsystem for Linux</a>, both the original and the newest WSL 2.</p>
<p><img align="right" src="https://ariya.io/images/2020/05/nix.png" width="347" alt="Nix on Debian"/></p>
<p>First, create the <code>/nix</code> directory owned by you (this is the common <a href="https://nixos.org/nix/manual/#sect-single-user-installation">single-user installation</a>):</p>
<pre><code>$ sudo mkdir /nix
$ sudo chown ariya /nix
</code></pre>
<p>And then, run the installation script:</p>
<pre><code>$ sh <(curl -L https://nixos.org/nix/install) --no-daemon
</code></pre>
<p>Note that if you use WSL 1, likely you will encounter some error such as:</p>
<pre><code>SQLite database '/nix/var/nix/db/db.sqlite' is busy
</code></pre>
<p>This is a known <a href="https://github.com/NixOS/nix/issues/2651">issue</a>; the workaround is to create a new file <code>~/.config/nix/nix.conf</code> with the following content</p>
<pre><code>sandbox = false
use-sqlite-wal = false
</code></pre>
<p>and repeat the previous step.</p>
<p>If nothing goes wrong, the script will perform the installation. Grab a cup of tea while waiting for it!</p>
<pre><code>downloading Nix 2.3.4 binary tarball for x86_64-linux
performing a single-user installation of Nix...
copying Nix to /nix/store......................................
replacing old 'nix-2.3.4'
installing 'nix-2.3.4'
unpacking channels...
</code></pre>
<p>Note that the last step (unpacking channels) can run for a very long time (no idea why, hope it will be fixed at some point). Just be patient.</p>
<p>To check whether Nix is successfully installed, we use the <em>Hello, world</em> tradition:</p>
<pre><code>$ nix-env -i hello
installing 'hello-2.10'
these paths will be fetched (6.62 MiB download, 31.61 MiB unpacked):
/nix/store/9l6d9k9f0i9pnkfjkvsm7xicpzn4cv2c-libidn2-2.3.0
/nix/store/df15mgn0zsm6za1bkrbjd7ax1f75ycgf-hello-2.10
/nix/store/nwsn18fysga1n5s0bj4jp4wfwvlbx8b1-glibc-2.30
/nix/store/pgj5vsdly7n4rc8jax3x3sill06l44qp-libunistring-0.9.10
$ which hello
/home/ariya/.nix-profile/bin/hello
$ hello
Hello, world!
</code></pre>
<p>In the above illustration, <code>hello</code> is a test package that does nothing but display the famous message. It looks simple, and yet it is very useful!</p>
<p>To get a feeling for the packages available at your disposal (almost 29 thousand of them):</p>
<pre><code>$ nix-env -qa > nix-packages.list
$ wc -l nix-packages.list
28974 nix-packages.list
$ less nix-packages.list
</code></pre>
<p>While it is not a substitute for the large collection of existing Debian/Ubuntu packages, very often what you get from Nix is more up-to-date. For instance, if you are stuck with a typical Ubuntu 18.04 LTS, it offers git 2.17.1, tmux 2.6.3, jq 1.5, curl 7.58, and Neovim 0.2.2. But, with Nix on that same Ubuntu system, at the time of this writing, you can enjoy git 2.26.2, tmux 3.1b, jq 1.6, curl 7.69, and Neovim 0.4.3.</p>
<p><img src="https://ariya.io/images/2016/06/nixshell.png" align="right"/>
The way I use Nix, however, is not merely as a mechanism to get fresher software and other utilities. Rather, the functional nature of Nix opens up the possibility of multiple working environments, each with a distinctive set of applications and tools, and the ability to <em>switch cleanly</em> between them. Those who use <a href="https://github.com/nvm-sh/nvm">nvm</a> (for Node.js) or <a href="https://virtualenv.pypa.io">virtualenv</a> (for Python) can probably appreciate this. Now, imagine nvm/virtualenv not only for Node.js/Python, but applied to an arbitrary set of packages. I have covered this in detail before in my previous blog post, <a href="https://ariya.io/2016/06/isolated-development-environment-using-nix/">Isolated Development Environment using Nix</a>. That blog post talked about Nix on macOS, but obviously the experience translates directly to Nix on Debian, Ubuntu, or any other Linux distribution for that matter.</p>
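<p>For a quick taste of that isolation, <code>nix-shell</code> can drop you into a throwaway environment with an arbitrary set of packages (the selection here is just an example); exit the shell and your regular environment is untouched:</p>
<pre><code>$ nix-shell -p git jq tmux
</code></pre>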
<p>I hope this will inspire you to explore <a href="https://nixos.org/nix">Nix</a> in depth!</p>
Practical Testing of Firebase Projects
https://ariya.io/2020/04/practical-testing-of-firebase-projects
Wed, 29 Apr 2020 12:10:04 -0700
<p>Your little Firebase project is getting bigger every day? Never underestimate the need to establish solid and firm integration tests from the get-go.</p>
<p>Once you start to utilize various features of Firebase, from <a href="https://firebase.google.com/docs/hosting">Hosting</a> and <a href="https://firebase.google.com/docs/functions">Functions</a> to <a href="https://firebase.google.com/docs/firestore/">Firestore</a>, it is imperative to incorporate practical local testing as soon as possible. Not only will it save you from some potential nightmares down the road, it can also facilitate faster iterations and quick(er) turn-around time during refactoring and feature implementation. Here are a few suggestions to get you started. To follow along, you can also check the git repository containing the sample code at <a href="https://github.com/ariya/hello-firebase-experiment">github.com/ariya/hello-firebase-experiment</a>.</p>
<p><img src="https://ariya.io/images/2020/04/hellofirebase.png" width="80%" alt="Hello Firebase project in Visual Studio Code editor"/></p>
<p>The first thing that you always need to do is to implement a <strong>health check</strong>. The name could be as simple as <code>ping</code>. Hence, inside your main Firebase Functions, there should be a block of code that looks like:</p>
<pre><code class="language-js">exports.ping = functions.https.onRequest((request, response) => {
  response.send('OK');
});
</code></pre>
<p>Now if you want to be fancy, it does not hurt to show the timestamp (<a href="https://www.epochconverter.com/">Unix epoch</a>), which is valuable for confirming that this is not a cached or outdated HTTP response. If you wish, feel free to extend it with useful tidbits (but be careful not to reveal sensitive information).</p>
<pre><code class="language-js">exports.ping = functions.https.onRequest((request, response) => {
  response.send(`OK ${Date.now()}`);
});
</code></pre>
<p>In your test code (shown here with <a href="https://www.npmjs.com/package/axios">Axios</a> to perform an HTTP request, but the concept applies to any library), do a quick sanity check that this <code>/ping</code> is working. This is an important step towards a reliable <strong>local testing</strong>.</p>
<pre><code class="language-js">it('should have a working ping function', async function () {
  const res = await axios.get('http://localhost:5000/ping');
  const status = res.data.substr(0, 2);
  const timestamp = res.data.substr(3);
  expect(status).toEqual('OK');
  expect(timestamp).toMatch(/[0-9]+/);
});
</code></pre>
<p>Now, the test might fail miserably. If that is the case, you do not have the proper setup yet to use and run <a href="https://firebase.google.com/docs/rules/emulator-setup">Firebase emulators</a>. Using npm, make sure to install all the following packages:</p>
<pre><code>firebase-tools
firebase-functions
firebase-functions-test
firebase-admin
@google-cloud/firestore
</code></pre>
<p>And check that your <code>firebase.json</code> looks like the following:</p>
<pre><code class="language-json">{
"hosting": {
"public": "./public"
"rewrites": [
{
"source": "/ping",
"function": "ping"
}
]
},
"emulators": {
"functions": {
"port": 5001
},
"firestore": {
"port": 8080
},
"hosting": {
"port": 5000
}
}
</code></pre>
<p>Note the <code>rewrites</code> section. This makes <code>/ping</code> handily available from the main Firebase Hosting domain, instead of the long and cryptic one such as <code>us-central1-YOURFIREBASEPROJECT.cloudfunctions.net/ping</code>.</p>
<p>Before running tests, make sure to launch the emulators for Functions, Firestore, and Hosting:</p>
<pre><code>npm run firebase -- emulators:start --project MYPROJECT
</code></pre>
<p>In the above command, <code>npm run firebase</code> works because of the run script definition. Also, substitute the name of your Firebase project accordingly. If the setup is correct, your terminal should show something like:</p>
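<p>That shortcut assumes a run script entry for the locally installed <code>firebase-tools</code>, something along these lines in <code>package.json</code> (a minimal sketch; the version constraint is illustrative):</p>
<pre><code class="language-json">{
  "scripts": {
    "firebase": "firebase"
  },
  "devDependencies": {
    "firebase-tools": "^8.0.0"
  }
}
</code></pre>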
<pre><code>emulators: Starting emulators: functions, hosting
hub: emulator hub started at http://localhost:4400
functions: functions emulator started at http://localhost:5001
hosting: Serving hosting files from: ./
hosting: Local server: http://localhost:5000
hosting: hosting emulator started at http://localhost:5000
functions[ping]: http function initialized
emulators: All emulators started, it is now safe to connect.
</code></pre>
<p>At this point, if you point your browser to <code>localhost:5000/ping</code>, you should get the <em>OK</em> message (followed by the number representing the timestamp as Unix epoch). Of course, running the full tests (<code>npm test</code>) should also yield a successful run.</p>
<p>When setting up the tests for CI (continuous integration), it might be easier to <strong>let the emulators run the test automatically</strong>. Here is how it is done:</p>
<pre><code>npm run firebase -- emulators:exec "npm test" --project MYPROJECT
</code></pre>
<p>The <code>exec</code> option runs the subsequent command, in this case the usual <code>npm test</code>, after starting the emulators. Once the command is completed (whether successfully or not), the emulators are automatically terminated. This is <a href="https://firebase.google.com/docs/emulator-suite/install_and_configure#integrate_with_your_ci_system">perfect for the CI run</a>!</p>
<p>Next trick up our sleeve: <strong>fixtures for Firestore</strong>. Let us assume that your application uses this NoSQL datastore via this simple function for illustration (and do not forget to add a new URL rewrite for <code>/answer</code>):</p>
<pre><code class="language-js">admin.initializeApp(functions.config().firebase);
const db = admin.firestore();
exports.answer = functions.https.onRequest(async (request, response) => {
  try {
    const doc = await db.collection('universe').doc('answer').get();
    const value = doc.data().value;
    console.log(`Answer is ${value}`);
    response.send(`Answer is ${value}`);
  } catch (err) {
    console.error(`Failed to obtain the answer: ${err.toString()}`);
    response.send(`EXCEPTION: ${err.toString()}`);
  }
});
</code></pre>
<p>And the corresponding test:</p>
<pre><code class="language-js">it('should give a proper answer', async function () {
  const res = await axios.get('http://localhost:5000/answer');
  const answer = res.data;
  expect(answer).toEqual('Answer is 42');
});
</code></pre>
<p>Launching the emulators (using the previous instructions) and running the tests, however, will result in a failure. And if you go to localhost:5000/answer, you will discover an error response:</p>
<pre><code>EXCEPTION: TypeError: Cannot read property 'value' of undefined
</code></pre>
<p>This should not come as a surprise. When the Firebase emulators launch, the Firestore database is empty. Hence, there is still no proper document, let alone a collection. It would be unnecessarily tedious to populate the database by hand (it works for this simple example, but a real-world app might have tons of collections and documents). How do we prepare a fixture for this?</p>
<p>Well, again the Firestore emulator comes to the rescue! While it is running, you can perform additional steps to populate the database (outside the scope of this blog post; perhaps we will discuss it some other time), and then <a href="https://firebase.google.com/docs/emulator-suite/install_and_configure#export_and_import_emulator_data">snapshot the database</a> and save it as the test fixture:</p>
<pre><code>npm run firebase -- emulators:export spec/fixture --project MYPROJECT
</code></pre>
<p>Once the fixture is available, rerun the emulator (either as <code>start</code> or through <code>exec</code>) with the <code>import</code> option and the Firestore database will not be empty anymore, as it is populated with the previous snapshot.</p>
<pre><code>npm run firebase -- emulators:start --import spec/fixture --project MYPROJECT
</code></pre>
<p>Last but not least, let us run this test as <strong>an automation workflow</strong> using <a href="https://github.com/features/actions">GitHub Actions</a>. All you need is a file named <code>.github/workflows/test.yml</code> with the following content:</p>
<pre><code class="language-yaml">name: Tests
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Use Node.js
uses: actions/setup-node@v1
with:
node-version: 10.x
- run: npm ci
- run: npm run firebase -- emulators:exec "npm test" --import spec/fixture
env:
CI: true
</code></pre>
<p>As it turns out, it is not too difficult to set up some practical tests of a Firebase project!</p>
Search Box and Cloud Function
https://ariya.io/2020/03/search-box-and-cloud-function
Tue, 31 Mar 2020 23:45:57 -0700
<p>For a blog hosted with Firebase Hosting, it turns out that a little search box is fairly easy to implement by using Cloud Functions for Firebase.</p>
<p>As with the current trend nowadays, this blog is a static site prepared with <a href="http://gohugo.io/">Hugo</a> and deployed to <a href="https://firebase.google.com/docs/hosting/">Firebase</a> (see my previous blog post: <a href="https://ariya.io/2017/05/static-site-with-hugo-and-firebase/">Static Site with Hugo and Firebase</a>). Some time ago, I realized that since I am using Firebase anyway, I might as well take advantage of its <a href="https://firebase.google.com/docs/functions/">Cloud Functions</a> to add a little search functionality to the blog, particularly for its <a href="https://firebase.google.com/docs/hosting/full-config#404">404 page</a>.</p>
<p><img src="https://ariya.io/images/2020/03/searchbox.png" width="50%" alt="search box"/></p>
<p>Of course, I am cheating a little bit. Using the above search box actually just redirects the search to my favorite search engine, <a href="https://duckduckgo.com">DuckDuckGo</a>, resulting in the following:</p>
<p><img src="https://ariya.io/images/2020/03/duck.png" width="50%" alt="DuckDuckGo search"/></p>
<p>Implementing it is almost trivial. First, we need <code>index.js</code> inside the <code>functions</code> subdirectory with the content as short as this (obviously, for your blog, replace <code>site</code> accordingly):</p>
<pre><code class="language-javascript">const functions = require('firebase-functions');
exports.search = functions.https.onRequest((request, response) => {
  const q = request.query.q || '';
  response.redirect(`https://duckduckgo.com/?q=site:ariya.io+${q}`);
});
</code></pre>
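<p>Deploying the function is the usual one-liner (this assumes the Firebase CLI is installed and the project has already been initialized):</p>
<pre><code>$ firebase deploy --only functions
</code></pre>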
<p>Once it is properly deployed, the trigger URL will be in the form of <code>us-central1-YOURFIREBASEPROJECT.cloudfunctions.net/search</code>. This is rather ugly. To overcome that, set up a <a href="https://firebase.google.com/docs/hosting/full-config#rewrites">rewrite</a> inside <code>firebase.json</code> so that it looks something like:</p>
<pre><code>{
  "hosting": {
    "rewrites": [
      {
        "source": "/search",
        "function": "search"
      }
    ]
  }
}
</code></pre>
<p>and thus, the function is available as the top-level <code>/search</code> of your Firebase Hosting URL, even if it is a custom domain.</p>
<p>After this, inserting the search box is also equally fun:</p>
<pre><code class="language-html"><form action="/search">
<p><input type="text" name="q" required> <button type="submit">Search</button></p>
</form>
</code></pre>
<p>When a visitor uses the search, they will get redirected to DuckDuckGo and be presented with the search result. Fast and easy!</p>
Automatic Merge of Pull Requests
https://ariya.io/2020/02/automatic-merge-of-pull-requests
Sat, 29 Feb 2020 23:01:57 -0800
<p>After using Azure DevOps for a while, I am totally sold on its Auto Complete feature for pull requests. While it does not apply universally, I do believe that any development process should reach the level where merging pull requests, or more generally, integrating all forms of contribution, is as automatic and as hassle-free as possible.</p>
<p>If you are not familiar yet with <a href="https://azure.microsoft.com/en-us/services/devops">Azure DevOps</a>, it is basically a pay-as-you-go service for code repositories, automatic build runs, task tracker, artifact management, etc. Azure DevOps is pretty much comparable to various other similar services, such as GitHub, GitLab, Bitbucket, and many others. Note that although it bears the name Azure, you do <em>not</em> need to use any other Azure services to be able to take advantage of Azure DevOps offering (similar to how you can use Google Maps but without the need to store your files at Google Drive or host your email with Gmail).</p>
<p>One feature that makes Azure DevOps (at the time of this writing) unique compared to others is its ability to mark a PR (pull request) as <em>Auto Complete</em>. To do this, go to the sidebar and choose <em>Branches</em> (under the <em>Repo</em> menu group). Once the branch list is displayed, hover on e.g. <em>master</em>, pick its context menu (the rightmost three-dot menu), and choose <em>Branch policies</em>. Pick the settings which suit your needs. Make sure to customize the <em>Build validation</em>; this is done by adding a simple build policy.</p>
<p><img src="https://ariya.io/images/2020/02/autocomplete.png" width="75%" alt="Enable auto completion"/></p>
<p>Now, whenever you create a pull request, there is a noticeable blue button, <em>Set Auto complete</em>, on the pull request page. Basically, what it does is merge the pull request automatically once two conditions are fulfilled:</p>
<ul>
<li>the pull request is approved (by one or more reviewers, per branch policy)</li>
<li>the build succeeds, i.e. as configured with its continuous integration</li>
</ul>
<p>There are also a few tweaks possible. For instance, you have the option to squash the branch, rebase and fast-forward, etc. Even better, there is an option to automatically delete the branch once it is merged, which can really help to reduce clutter.</p>
<p>Removing the manual step of merging an approved pull request eliminates one more thing that we, human beings, need to be involved with. Who would not enjoy a lighter cognitive load? I hope other services such as GitHub, GitLab, Bitbucket, and many more will follow suit and implement the same feature!</p>
Clang on Windows
https://ariya.io/2020/01/clang-on-windows
Sun, 05 Jan 2020 14:46:09 -0800
<p>Thanks to the MSYS2 project, there is now an easy way to utilize Clang to build C/C++ applications on Windows. This works equally well for both 32-bit and 64-bit programs.</p>
<p><a href="https://www.msys2.org/">MSYS2</a> is a fantastic (and better) reimagination of <a href="https://www.cygwin.com/">Cygwin</a>, it is like taking the best part of a typical modern Unix environment (a familiar shell, a general collection of utilities, a porting layer, a package manager, and so on) while still working on Windows. Bootstrapping into MSYS2 is easy, either install it directly (using the GUI installer) or use <a href="https://chocolatey.org/">Chocolatey</a>: <code>choco install msys2</code>. Once inside its shell, <code>pacman</code> is the go-to, ever-so-powerful <a href="https://github.com/msys2/msys2/wiki/Using-packages">package manager</a>, with thousands of packages available at your disposal.</p>
<p>This, of course, includes the toolchain. Not only is the latest GCC there, but we also have <a href="https://clang.llvm.org/">Clang</a>! To illustrate the concept, let us go back to the simple ANSI C/C90 program covered in the <a href="https://ariya.io/2019/07/continuous-integration-of-vanilla-c-programs-for-intel-arm-and-mips-architecture">previous blog post</a>. Once we clone the repository, open the MSYS2 32-bit shell and try the following:</p>
<pre><code>pacman -S msys/make mingw32/mingw-w64-i686-clang
</code></pre>
<p>It is a simple step to install both Make and Clang. Wait a bit and after that, do the usual magic:</p>
<pre><code>CC=clang make
</code></pre>
<p>A caveat here: Clang for Windows does not append the <code>.exe</code> suffix to the executable. Thus, a quick rename to the rescue:</p>
<pre><code>ren hello hello.exe
</code></pre>
<p>And now you can run, inspect, analyze the executable as usual.</p>
<p><img src="https://ariya.io/images/2020/01/clang-msys2.png" alt="Pipelines Clang on Windows" /></p>
<p>To incorporate it into the continuous integration using Azure Pipelines (again, see the <a href="https://ariya.io/2019/07/continuous-integration-of-vanilla-c-programs-for-intel-arm-and-mips-architecture">previous blog post</a>), we shall construct a new job. The basic step is as follows.</p>
<pre><code class="language-yaml">- job: 'i686_windows_clang'
pool:
vmImage: 'vs2017-win2016'
variables:
PACMAN_PACKAGES: C:\tools\msys64\var\cache\pacman\pkg
CC: clang
</code></pre>
<p>First, programmatically install MSYS2:</p>
<pre><code class="language-yaml">  - script: choco install --no-progress msys2
    displayName: 'Install MSYS2'
</code></pre>
<p>After that, perform some pacman maintenance:</p>
<pre><code class="language-yaml"> - script: |
pacman -Sy
pacman --noconfirm -S pacman-mirrors
workingDirectory: C:\tools\msys64\usr\bin\
displayName: 'Check pacman'
</code></pre>
<p>And then, we install the required packages. At the time of this writing, Clang <a href="http://releases.llvm.org/9.0.0/tools/clang/docs/">version 9.0</a> (the latest) will be installed.</p>
<pre><code class="language-yaml"> - script: pacman --noconfirm -S msys/make mingw64/mingw-w64-x86_64-clang
workingDirectory: C:\tools\msys64\usr\bin\
displayName: 'Install requirements'
</code></pre>
<p>For the x86 architecture (aka, 32-bit Intel/AMD), install a different package:</p>
<pre><code class="language-yaml"> - script: pacman --noconfirm -S msys/make mingw32/mingw-w64-i686-clang
workingDirectory: C:\tools\msys64\usr\bin\
displayName: 'Install requirements'
</code></pre>
<p>And now, down to the actual build step:</p>
<pre><code class="language-yaml"> - script: |
set PATH=C:\tools\msys64\usr\bin;C:\tools\msys64\mingw64\bin;%PATH%
make
ren hello hello.exe
displayName: 'make'
</code></pre>
<p>As a minor tweak, we can also cache the pacman-downloaded packages. In the above example, it hardly matters since we only install Make and Clang. But if you have a larger application, e.g. one requiring Python, Qt, and so on, it is wise to avoid having the CI run redownload the same packages again and again (saving bandwidth, and also being nice to those mirrors). We can achieve this by using the <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/caching">Cache task</a> from Azure Pipelines. Simply insert this after the MSYS2 installation step.</p>
<pre><code class="language-yaml"> - task: Cache@2
inputs:
key: pacman
restoreKeys: pacman
path: $(PACMAN_PACKAGES)
displayName: 'Cache pacman packages'
</code></pre>
<p>For the complete illustration of such a job, take a look at the actual <a href="https://github.com/ariya/hello-c90/blob/master/azure-pipelines.yml">azure-pipelines.yml</a> for the <a href="https://github.com/ariya/hello-c90">hello-c90</a> project.</p>
<p>Clang everywhere, yay!</p>
Continuous Integration of Vanilla C Programs for Intel, ARM, and MIPS Architecture
https://ariya.io/2019/07/continuous-integration-of-vanilla-c-programs-for-intel-arm-and-mips-architecture
Mon, 22 Jul 2019 15:33:34 -0700https://ariya.io/2019/07/continuous-integration-of-vanilla-c-programs-for-intel-arm-and-mips-architecture<p>Developing cross-platform applications presents a major challenge: how to ensure that every commit does not break some combination of operating systems and CPU architectures. Fortunately, thanks to an array of online services and open-source tools, this challenge becomes easier to tackle.</p>
<p>For this demo, I have the traditional <em>Hello, world</em> program written in ANSI C/C90 at this repository: <a href="https://github.com/ariya/hello-c90">github.com/ariya/hello-c90</a> (feel free to take a look). The objective is to verify its automatic build (for the purpose of continuous integration) for a number of different CPU architectures, operating systems, as well as C/C++ compilers. The supported CPU architectures (using <a href="https://wiki.debian.org/SupportedArchitectures">Debian nomenclature</a>) are amd64, i386, i686, armhf, arm64, and mips. Among the C/C++ compilers to be tested are <a href="https://gcc.gnu.org/">GCC</a>, <a href="https://clang.llvm.org/">Clang</a>, <a href="https://bellard.org/tcc/">TCC</a>, <a href="https://docs.microsoft.com/en-us/cpp">Visual C/C++</a> (as part of Visual Studio 2017 and also 2019), <a href="http://www.smorgasbordet.com/pellesc/">Pelles C</a>, <a href="https://digitalmars.com/">Digital Mars</a>, as well as <a href="http://mingw.org/">MinGW</a>. Obviously, some combinations are not available. For instance, there is no such thing (at least, not yet) as Visual C/C++ for Linux or MinGW targeting macOS.</p>
<p><img src="https://ariya.io/images/2019/07/ci.png" alt="Build jobs" /></p>
<p>In this particular blog post, we will use <a href="https://azure.microsoft.com/en-us/services/devops/pipelines/">Azure Pipelines</a>, a hosted build system supporting all three major operating systems: Windows, macOS, and Linux. For the DIY among you, the same setup can be achieved by using something like <a href="https://jenkins.io/">Jenkins</a>, <a href="https://docs.gitlab.com/ce/ci/">GitLab CI</a>, <a href="https://www.jetbrains.com/teamcity/">TeamCity</a>, and many other alternatives, along with some build agents for the corresponding OS you want to target.</p>
<p>The build itself is configured via the YAML file, <code>azure-pipelines.yml</code>. There is a job for each unique combination of (Architecture, Operating System, Compiler). For example, <code>amd64_linux_gcc</code> denotes the build job for a Linux binary on the Intel/AMD 64-bit architecture, compiled using GCC. As of now, the total number of those jobs is 16.</p>
<p>The most obvious build job looks something like this. It runs natively on the hosted agent of Azure Pipelines. We just need to make sure that the right compiler (GCC in this case) is installed. For Linux and macOS, this can be done via the package managers, <a href="https://wiki.debian.org/Apt">apt</a> and <a href="https://brew.sh/">Homebrew</a>, respectively.</p>
<pre><code class="language-Makefile">- job: 'amd64_linux_gcc'
pool:
vmImage: 'ubuntu-16.04'
steps:
- script: sudo apt install -y make gcc
displayName: 'Install requirements'
- script: gcc --version
displayName: 'Verify tools version'
- script: CC=gcc make
displayName: 'make'
- script: file ./hello
displayName: 'Verify executable'
- script: ./hello
displayName: 'Run'
</code></pre>
<p>On Windows, there is no need to do that since the hosted Windows agent is already equipped with Visual Studio. However, because the build is carried out with a Makefile (more specifically, <code>Makefile.win</code>), we need GNU Make, which is installed via <a href="https://chocolatey.org/">Chocolatey</a>. Note that one stage in the build job verifies the executable (useful to know whether it was built correctly or not) using <code>file</code> (Linux and macOS) or <code>dumpbin</code> (Windows).</p>
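<p>Roughly, the interesting steps of that Windows job look like the sketch below. The Makefile invocation and the <code>dumpbin</code> filter are illustrative only, and <code>dumpbin</code> assumes the Visual Studio developer environment is on the <code>PATH</code>; the authoritative steps live in <code>azure-pipelines.yml</code>.</p>
<pre><code>REM install GNU Make from Chocolatey (the MSVC toolchain is already on the agent)
choco install -y make
REM build using the Windows-specific Makefile
make -f Makefile.win
REM inspect the produced binary: machine type and subsystem
dumpbin /headers hello.exe | findstr /i "machine subsystem"
</code></pre>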
<p>Two special Windows compilers, <a href="https://digitalmars.com/">Digital Mars</a> and <a href="http://www.smorgasbordet.com/pellesc/">Pelles C</a> (a Windows-only compiler derived from LCC), need to be installed on the fly since they are not available on the Windows hosted agents. Digital Mars is installed with a little dance of <code>curl</code> and <code>unzip</code>, while Pelles C is readily available from Chocolatey.</p>
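<p>For the curious, that dance looks roughly like the following sketch. The download URL and the Chocolatey package name here are placeholders, not the exact values used in the pipeline; check the respective vendors (and <code>azure-pipelines.yml</code>) for the real ones.</p>
<pre><code># Digital Mars: fetch and extract the compiler archive (placeholder URL)
curl -fsSL -o dm.zip https://example.com/path/to/dmc.zip
unzip -q dm.zip -d dm
# Pelles C: install via Chocolatey (the package name here is an assumption)
choco install -y pellesc
</code></pre>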
<p>To target non-Intel CPU architectures, we need to use some cross compilers. Since the hosted Linux agent of Azure Pipelines supports Docker, the easiest way to achieve this is to use a Docker-based cross compilation using <a href="https://github.com/dockcross/dockcross">dockcross</a>. This is explained in-depth in my previous blog post, <a href="https://ariya.io/2019/06/cross-compiling-with-docker-on-wsl-2">Cross Compiling with Docker</a>. One such example is the following build job, which builds for Linux running on ARM (32-bit). Note that since the resulting executable is an ARM binary, we ought to use <a href="https://www.qemu.org/">QEMU</a> to run it.</p>
<pre><code class="language-Makefile">- job: 'armhf_linux_gcc'
pool:
vmImage: 'ubuntu-16.04'
steps:
- script: sudo apt install -y qemu-user
displayName: 'Install requirements'
- script: |
git clone --depth 1 https://github.com/dockcross/dockcross.git
cd dockcross
docker run --rm dockcross/linux-armv7 > ./dockcross-linux-armv7
chmod +x ./dockcross-linux-armv7
displayName: 'Prepare Dockcross'
- script: ./dockcross/dockcross-linux-armv7 bash -c '$CC --version'
displayName: 'Verify tools version'
- script: ./dockcross/dockcross-linux-armv7 make LDFLAGS=-static
displayName: 'make'
- script: file ./hello
displayName: 'Verify executable'
- script: qemu-arm ./hello
displayName: 'Run'
</code></pre>
<p>The same approach using Docker and QEMU works well for other CPU architectures such as MIPS, 64-bit ARM, and in fact 32-bit Intel x86. The last one is necessary because the hosted agent of Azure Pipelines runs in 64-bit mode, so we rely on this emulation layer (QEMU) to verify the correct execution of the 32-bit binary.</p>
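<p>For example, the MIPS variant boils down to the same handful of commands inside its job. This is only a sketch: it assumes the <code>dockcross/linux-mips</code> image follows the same convention as the ARM one above, and that <code>qemu-mips</code> ships with the <code>qemu-user</code> package.</p>
<pre><code>$ sudo apt install -y qemu-user
$ docker run --rm dockcross/linux-mips > ./dockcross-linux-mips
$ chmod +x ./dockcross-linux-mips
$ ./dockcross-linux-mips make LDFLAGS=-static
$ qemu-mips ./hello
</code></pre>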
<p>Two examples for MinGW are illustrated below. In the first, MinGW is installed on the Windows agent; this is self-explanatory.</p>
<pre><code class="language-Makefile">- job: 'amd64_windows_mingw'
pool:
vmImage: 'vs2017-win2016'
variables:
CC: 'gcc'
steps:
- script: choco install mingw --version 8.1.0
displayName: 'install MinGW-w64'
- script: gcc --version
displayName: 'Verify tools version'
- script: make
displayName: 'make'
- script: file hello.exe
displayName: 'Verify executable'
- script: hello.exe
displayName: 'Run'
</code></pre>
<p>For the second example, MinGW is used in a cross-compilation fashion. Again, we use the Docker-based dockcross to achieve this. The compiler (GCC) runs inside the Docker container on the hosted Linux agent, yet it produces a Windows executable. How do we run the resulting executable? QEMU is not suitable here, since emulating the CPU is not enough: the binary still needs Windows, and the host is Linux. But we have <a href="https://www.winehq.org/">WINE</a> to the rescue!</p>
<pre><code class="language-Makefile">- job: 'i386_windows_mingw_static'
pool:
vmImage: 'ubuntu-16.04'
steps:
- script: |
git clone --depth 1 https://github.com/dockcross/dockcross.git
cd dockcross
docker run --rm dockcross/windows-static-x86 > ./dockcross-windows-static-x86
chmod +x ./dockcross-windows-static-x86
displayName: 'Prepare Dockcross'
- script: ./dockcross/dockcross-windows-static-x86 bash -c '$CC --version'
displayName: 'Verify tools version'
- script: ./dockcross/dockcross-windows-static-x86 make
displayName: 'make'
- script: file ./hello
displayName: 'Verify executable'
- script: docker run -v $PWD:/app tianon/wine:32 bash -c "wine /app/hello"
displayName: 'Run'
</code></pre>
<p>In fact, to avoid the hassle of on-the-fly installation/configuration of WINE, we just use the Dockerized WINE.</p>
<p>The whole ordeal of running 16 jobs will take anywhere from 5 to 20 minutes. Obviously, if you are constrained by the free tier of Azure Pipelines, you can purchase access to more hosted agents or attach your own build agents, which will definitely parallelize and speed things up.</p>
<p>I hope that the idea outlined in this post will inspire you to continue working on more cross-platform apps. Of course, it does not have to be an application written in ANSI C. The concept can be applied to D, Go, Rust, and many other modern languages and compilers.</p>
Cross Compiling with Docker on WSL 2
https://ariya.io/2019/06/cross-compiling-with-docker-on-wsl-2
Sun, 30 Jun 2019 14:24:51 -0700https://ariya.io/2019/06/cross-compiling-with-docker-on-wsl-2<p>Now that WSL 2 packs a true Linux kernel and supports Linux containers (via Docker), it can be a perfect setup to perform application cross compilations.</p>
<p>While <a href="https://engineering.docker.com/2019/06/docker-hearts-wsl-2/">Docker for Windows</a> will soon support WSL 2, it is just easier to use WSL 2 as is, install Docker, and use it. In case you are new to the <a href="https://docs.microsoft.com/en-us/windows/wsl/install-win10">wonderful world</a> of WSL, check the <a href="https://docs.microsoft.com/en-us/windows/wsl/wsl2-install">documentation</a> to have it installed. Note that <a href="https://devblogs.microsoft.com/commandline/announcing-wsl-2/">for WSL 2</a>, you need to be using Windows Insiders for now. <strong>Update:</strong> No need to use Windows Insiders anymore as WSL 2 is now included with <a href="https://docs.microsoft.com/en-us/windows/release-information/status-windows-10-2004">Windows 10 June 2020 update (version 2004)</a>.</p>
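<p>For the impatient, on Windows 10 version 2004 or later the setup is roughly the following, run from an elevated PowerShell. This is only a sketch: it assumes the Ubuntu distribution from the Microsoft Store is already installed, the distribution name may differ on your machine, and the official documentation remains the authoritative reference.</p>
<pre><code># enable the required Windows features (a reboot may be needed afterwards)
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
# make WSL 2 the default and convert the existing distribution
wsl --set-default-version 2
wsl --set-version Ubuntu-18.04 2
# confirm that the distribution now reports VERSION 2
wsl --list --verbose
</code></pre>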
<p><img src="https://ariya.io/images/2019/06/wsl2.png" alt="WSL Initial Screen" /></p>
<p>Once Ubuntu 18.04 is installed (the default for WSL), you can verify that it is indeed working:</p>
<pre><code>$ uname -a
Linux XPS 4.19.43-microsoft-standard #1 SMP Mon May 20 19:35:22 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.2 LTS"
</code></pre>
<p>If <code>uname -a</code> returns something like this instead:</p>
<pre><code>$ uname -a
Linux XPS 4.4.0-18362-Microsoft #1-Microsoft Mon Mar 18 12:02:00 PST 2019 x86_64 x86_64 x86_64 GNU/Linux
</code></pre>
<p>then you still have the original WSL, not WSL 2. Refer to the <a href="https://docs.microsoft.com/en-us/windows/wsl/wsl2-install">documentation</a> again on how to enable WSL 2 instead.</p>
<p>The next step is to install <a href="https://www.docker.com/">Docker</a>. There are tons of tutorials on this subject; see for instance <a href="https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-18-04">this guide</a> on Digital Ocean. Once it is properly installed, start the daemon by running:</p>
<pre><code>$ sudo service docker start
</code></pre>
<p>At this point, it is likely wise to add yourself to the proper group, to avoid using sudo all the time:</p>
<pre><code>$ sudo usermod -aG docker $USER
</code></pre>
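<p>Putting those pieces together, a minimal sketch of the Docker setup on a fresh Ubuntu 18.04 under WSL 2 can be as short as this (assuming the <code>docker.io</code> package from the Ubuntu archive is good enough for our purpose):</p>
<pre><code>$ sudo apt update
$ sudo apt install -y docker.io
$ sudo service docker start
$ sudo usermod -aG docker $USER
$ newgrp docker    # or simply open a new shell for the group change to apply
</code></pre>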
<p>Before we do something crazy, let us ensure that Docker works:</p>
<pre><code>$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
</code></pre>
<p>Since the topic is cross-compilation, let us assume there is a rather simplistic <em>Hello world</em> in ANSI C/C90/C99:</p>
<pre><code class="language-cpp">#include <stdio.h>
int main(int argc, char** argv) {
printf("Hello, world!\n");
return 0;
}
</code></pre>
<p>with the following <code>Makefile</code>:</p>
<pre><code>.POSIX:
.SUFFIXES:
CFLAGS = -O -Wall -std=c90
all: hello
hello: hello.o
	$(CC) $(LDFLAGS) -o hello hello.o
.SUFFIXES: .c .o
.c.o:
	$(CC) $(CFLAGS) -c $<
</code></pre>
<p><img src="https://ariya.io/images/2019/06/crosscompile.png" alt="Cross-compilation with Dockcross" /></p>
<p>Run a quick test on your host system (assuming <code>gcc</code> and friends are already available):</p>
<pre><code>$ gcc -o hello hello.c
$ ./hello
"Hello, world!"
</code></pre>
<p>What about compiling this marvelous C program for ARM? Another tool we would need is this excellent project, <a href="https://github.com/dockcross/dockcross">Dockcross</a>, a Docker-based setup for painless cross-compilation.</p>
<pre><code>$ git clone https://github.com/dockcross/dockcross.git && cd dockcross
$ docker run --rm dockcross/linux-armv7 > ./dockcross-linux-armv7
$ chmod +x ./dockcross-linux-armv7
</code></pre>
<p>Cross-compile the program for ARM v7 (this is for 32-bit ARM architecture):</p>
<pre><code>$ ./dockcross-linux-armv7 bash -c "$CC -o hello hello.c -static"
</code></pre>
<p>Or better, use the Makefile, and hence the simplified command:</p>
<pre><code>$ ./dockcross-linux-armv7 make LDFLAGS=-static
/usr/xcc/armv7-unknown-linux-gnueabi/bin/armv7-unknown-linux-gnueabi-gcc -O -Wall -std=c90 -c hello.c
/usr/xcc/armv7-unknown-linux-gnueabi/bin/armv7-unknown-linux-gnueabi-gcc -static -o hello hello.o
</code></pre>
<p>As evidence that this is no longer a native host binary, verify the freshly baked executable:</p>
<pre><code>$ file ./hello
hello: ELF 32-bit LSB executable, ARM, EABI5 version 1 (GNU/Linux), statically linked, for GNU/Linux 4.10.8, with debug_info, not stripped
</code></pre>
<p>Whoa! That was indeed quite painless.</p>
<p>How do we execute this file to ensure that it is working as expected? <a href="https://www.qemu.org/">QEMU</a>, another great open-source project, to the rescue! First make sure that it is there (we need the <a href="https://packages.ubuntu.com/bionic/qemu-user">qemu-user package</a>).</p>
<pre><code>$ sudo apt -y install qemu-user
</code></pre>
<p>And now the fun time: running the ARMv7 binary we built earlier:</p>
<pre><code>$ qemu-arm ./hello
"Hello, world!"
</code></pre>
<p>Of course, this is just one target architecture. Amazingly, Dockcross supports a wide range of cross-compilation targets, including the MIPS and PowerPC architectures, and even Windows and WebAssembly.</p>
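<p>The recipe barely changes from one target to the next. As another sketch, a 64-bit little-endian PowerPC build and smoke test could look like this, assuming the <code>dockcross/linux-ppc64le</code> image and that <code>qemu-ppc64le</code> is part of the qemu-user package:</p>
<pre><code>$ docker run --rm dockcross/linux-ppc64le > ./dockcross-linux-ppc64le
$ chmod +x ./dockcross-linux-ppc64le
$ ./dockcross-linux-ppc64le make LDFLAGS=-static
$ qemu-ppc64le ./hello
Hello, world!
</code></pre>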
<p>Here is how to build a Windows executable in 3 easy steps:</p>
<pre><code>$ docker run --rm dockcross/windows-static-x86 > ./dockcross-windows-static-x86
$ chmod +x ./dockcross-windows-static-x86
$ ./dockcross-windows-static-x86 make
/usr/src/mxe/usr/bin/i686-w64-mingw32.static-gcc -O -Wall -std=c90 -c hello.c
/usr/src/mxe/usr/bin/i686-w64-mingw32.static-gcc -o hello hello.o
$ file ./hello
./hello: PE32 executable (console) Intel 80386, for MS Windows
</code></pre>
<p>And thanks to the excellent Windows-Linux <a href="https://docs.microsoft.com/en-us/windows/wsl/interop">interoperability</a> of WSL, you can also run the executable directly.</p>
<pre><code>$ ./hello
"Hello, world!"
</code></pre>
<p>In summary, even from a shiny Windows laptop, the combination of WSL 2, Docker, dockcross, and QEMU allows us to cross-compile apps for a number of processor architecture and operating system combinations.</p>
<p>Now, what kind of great apps do you plan to build today?</p>