Thoughts and Opinions raindev's blog Andrew Barchuk [email protected] Mixing Sync and Async Rust 2022-09-15T00:00:00+00:00 <p>Recently I have read <a href="">JEP 426</a>, a Java enhancement proposal introducing virtual threads - essentially a way to map multiple Java threads to a few operating system threads. I thought it was brilliant, especially the fact that virtual threads could run <em>unmodified</em> code.</p> <p>Rust takes a different approach to overcoming the scalability issues of operating system threads, with asynchronous runtimes and async/await language support. One of the issues, however, is that the code has to be adapted for the asynchronous model. While async/await syntax significantly improves the experience of writing asynchronous code that is still straightforward to understand, mixing both styles of programming is still very annoying. Or is it really?</p> <p>Let's look first at running an async function from a normal function. It's a common complaint that depending on a single async function &quot;infects&quot; the code and requires it to be asynchronous all the way. This is not quite the case. Executing an async function requires a runtime, and here's how we get one that will run code on the caller thread:</p> <pre data-lang="rust" class="language-rust "><code class="language-rust" data-lang="rust">let runtime = tokio::runtime::Builder::new_current_thread()
    .enable_all()
    .build()?;
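
// A hedged sketch of the full picture: `my_async_function` stands in for
// any async fn, and the `?` assumes an enclosing function that returns a
// compatible Result:
//
// fn main() -&gt; Result&lt;(), Box&lt;dyn std::error::Error&gt;&gt; {
//     let runtime = tokio::runtime::Builder::new_current_thread()
//         .enable_all()
//         .build()?;
//     let _result = runtime.block_on(my_async_function());
//     Ok(())
// }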
</code></pre> <p>Now running an async function is quite simple:</p> <pre data-lang="rust" class="language-rust "><code class="language-rust" data-lang="rust">let result = runtime.block_on(my_async_function());
</code></pre> <p>Instead of <a href=""><code>new_current_thread</code></a> we could use <a href=""><code>new_multi_thread</code></a> to get a thread pool runtime that allows running tasks asynchronously with <a href=""><code>Runtime::spawn</code></a> and waiting for the completion of tasks with <a href=""><code>Runtime::block_on</code></a>.</p> <p>Alright, this wasn't too bad. What about running synchronous code from an async function? A simple function that does data transformation could be run without any fuss. The problem is code doing either blocking IO or CPU-intensive computations, as it would block one of the runtime threads and reduce the capacity available to execute async tasks. Thankfully the solution is straightforward:</p> <pre data-lang="rust" class="language-rust "><code class="language-rust" data-lang="rust">let result = task::spawn_blocking(|| my_slow_http_call()).await?;
</code></pre> <p><a href=""><code>spawn_blocking</code></a> would execute the task using a dynamically sized thread pool dedicated to blocking tasks.</p> <p>While it's not the same as being able to run the same code with a blocking or an asynchronous runtime, mixing the two approaches is not too difficult. If you want to read more on the topic I suggest <a href="">the Tokio tutorial on bridging with sync code</a>.</p> How to not Write Emacs Config in Org 2021-01-30T00:00:00+00:00 <p>I have started simple.
<code>~/.emacs.d/init.el</code> had just one line:</p> <pre><code>(org-babel-load-file &quot;~&#x2F;.emacs.d&#x2F;&quot;)
</code></pre> <p><code></code> was simple too:</p> <pre><code>#+begin_src emacs-lisp
(setq org-directory &quot;~&#x2F;org&quot;)
#+end_src
</code></pre> <p>After restarting Emacs everything seemed to have worked fine: <code>C-h v</code> told me that the value of <code>org-directory</code> was indeed <code>~/org</code>. So far so good. I opened <code>init.el</code> and its content was:</p> <pre><code>(setq org-directory &quot;~&#x2F;org&quot;)
</code></pre> <p>That's right, the same as the code I had in the source block in <code></code>. Clumsy Emacs beginner, I thought, I must have written the wrong file! I quickly fixed it to put <code>org-babel-load-file</code> in the right place and restarted Emacs. Right after start the <code>*Warnings*</code> buffer popped up and showed me this error:</p> <pre><code>error: Recursive load, &#x2F;home&#x2F;raindev&#x2F;.emacs.d&#x2F;init.el, &#x2F;home&#x2F;raindev&#x2F;.emacs.d&#x2F;init.el, &#x2F;home&#x2F;raindev&#x2F;.emacs.d&#x2F;init.el, &#x2F;home&#x2F;raindev&#x2F;.emacs.d&#x2F;init.el, &#x2F;home&#x2F;raindev&#x2F;.emacs.d&#x2F;init.el
</code></pre> <p>Whoa! My first line of configuration and somehow I had broken Emacs already and made it load the config again and again. I searched the Internet for the error message in vain and, in frustration, went to #emacs and cried for help. I was told that it looked like <code>org-babel-load-file</code> needed something that hadn't been initialized yet, causing Emacs to attempt to load the config again. It makes sense, I thought. With some guidance (thanks pjb!) I made Emacs load the .org file 5 seconds later, when init.el had already been processed:</p> <pre><code>(defun load-org-config ()
  (org-babel-load-file &quot;~&#x2F;.emacs.d&#x2F;&quot;))

(run-at-time 5 1 &#x27;load-org-config)
</code></pre> <p>Once again, I restarted Emacs. This time, no error messages on startup.
When I switched to <code>*Messages*</code> however, I saw an endless stream of:</p> <pre><code>Loading &#x2F;home&#x2F;raindev&#x2F;.emacs.d&#x2F;init.el (source)...done
Loaded ~&#x2F;.emacs.d&#x2F;init.el
Loading &#x2F;home&#x2F;raindev&#x2F;.emacs.d&#x2F;init.el (source)...done
Loaded ~&#x2F;.emacs.d&#x2F;init.el
Loading &#x2F;home&#x2F;raindev&#x2F;.emacs.d&#x2F;init.el (source)...done
Loaded ~&#x2F;.emacs.d&#x2F;init.el
...
</code></pre> <p>Hm, this is interesting, I thought. The problem happened <em>after</em> Emacs was fully initialized as well. I killed the unresponsive Emacs and started it without loading the config using <code>emacs -q</code>. I hit <code>M-x</code> and used <code>eval-expression</code> to run <code>(org-babel-load-file &quot;~/.emacs.d/&quot;)</code>. Not surprisingly, the same thing happened: Emacs started reloading the config endlessly.</p> <p>In despair, the next time I loaded just a random <code></code> file that didn't have any configuration or any source code whatsoever. I got an error. But also something magical happened - a debugger popped up!</p> <p><img src="" alt="img" /></p> <p>I could see the chain of function calls that led to the error. Even more, I could click on them and see the source code. Wow! This is amazing! I had the source code of Emacs' packages right in front of me.</p> <p>I started exploring. The error message was:</p> <pre><code>(file-missing &quot;Cannot open load file&quot; &quot;No such file or directory&quot; &quot;&#x2F;home&#x2F;raindev&#x2F;org&#x2F;projects.el&quot;)
</code></pre> <p>That's strange, I thought. I had loaded <code></code> and not <code>projects.el</code>. Something is going on. Looking at the backtrace I could see <code>org-babel-load-file(&quot;~/org/&quot;)</code> - that's the function I called. The next call was <code>load-file(&quot;~/org/projects.el&quot;)</code> - .org has changed to .el. Something must have happened during the previous step.
I clicked on <code>org-babel-load-file</code> and started reading the code:</p> <pre><code>This function exports the source code using `org-babel-tangle&#x27;
and then loads the resulting file using `load-file&#x27;
</code></pre> <p>And that's what happened a couple of lines below:</p> <pre><code>(let* ((tangled-file (concat (file-name-sans-extension file) &quot;.el&quot;)
</code></pre> <p>So, the function exports the source code to a file with the same name but an .el extension. Ah! As there was no code in <code></code>, nothing was exported, leading to the <code>file-missing</code> error.</p> <p>Going back to the original problem with the new knowledge: why wasn't <code>init.el</code> replaced by the code from <code></code> then? Oh, wait, wasn't it? Remember how the first time I loaded the Org config I discovered that I had &quot;accidentally&quot; written the config to <code>init.el</code> directly? I didn't!</p> <pre><code>;; Tangle only if the Org file is newer than the Elisp file
</code></pre> <p><code>org-babel-load-file</code> did. As the comment says, the function only writes the .el file if the .org file is newer. That's what happened the first time. Afterwards I kept tweaking <code>init.el</code>, so it was always newer than <code></code>. But what's more, when <code>org-babel-load-file</code> was executed, Emacs thought <code>init.el</code> was the file with the code exported from <code></code> and loaded it. In turn, it called the same function and entered the endless cycle.</p> <p><img src="" alt="img" /></p> <p>The solution was simple - give the Org config a different name:</p> <pre><code>(org-babel-load-file &quot;~&#x2F;.emacs.d&#x2F;&quot;)
</code></pre> <p>And that's it. <code></code> will get exported to <code>config.el</code> and loaded once.</p> Operability and Rust 2020-12-02T00:00:00+00:00 <p>Most discussions of programming languages focus on the development of software.
Productivity of getting something up and running on one hand, and maintenance costs on the other. Usually proponents of dynamic typing and interpreted languages are focused more on the speed of writing new code. The fans of static typing and compiled languages emphasize maintenance, the ability to change existing software<a id="ref1" class="ref" href="#1"><sup>1</sup></a>. This is obviously an oversimplification, but in general it summarizes my observations of the discussion.</p> <p>My intention is not to argue about the merits of one or the other approach to development but to bring attention to a different way of looking at programming languages: operability. The best kind of operations is no operations. Operating a system should be boring. I'm sure everyone would prefer spending a weekend with friends or family to fixing broken authentication, debugging slow API response times or trying to bring an unresponsive service back to life. An operable system is one which makes the job of the people tasked with running it easy, one which works as expected both in terms of logical behaviour (correctness) and responsiveness (performance)<a id="ref2" class="ref" href="#2"><sup>2</sup></a>.</p> <p>Those two values are traditionally in conflict. Picking one means sacrificing some of the other. With C/C++ (and manual memory management), for example, that means achieving very high and predictable performance but accepting a risk of segfaults, memory corruption and security vulnerabilities. Memory safe languages (e.g. Java, Python) do protect from those but on the other hand expose their users (and the people operating services written in them) to the complexity of garbage collection (GC), which is frequently a cause of tricky performance problems and unpleasant surprises. Languages that emphasize correctness and provide an expressive type system which can be used to prevent bugs (e.g. Scala, Haskell) frequently make it easy to build constructs which are inefficient in non-obvious ways.
Why an algorithm takes much more memory than expected, or spends a lot of time on seemingly simple tasks, are not the kinds of questions you want to be answering during on-call duty.</p> <p>Rust possesses some properties which make it somewhat<a id="ref3" class="ref" href="#3"><sup>3</sup></a> uniquely attractive from the operational point of view. It aims to be memory safe without the drawbacks of a GC, provides zero-cost abstractions and an expressive type system, and has very explicit error handling. All those features aid building software that behaves correctly and performs predictably and consistently. This is why I believe Rust has a lot of potential to become a de facto choice for building systems where reliability and performance are of crucial importance, as the language stabilizes and the ecosystem matures. The additional up-front effort and a steep learning curve will pay off, considering the costs involved in keeping a service running and evolving after it has been launched<a id="ref4" class="ref" href="#4"><sup>4</sup></a>.</p> <p id="1" class="footnote">1. In theory the axes of typing and ahead-of-time compilation are orthogonal, but in practice they oftentimes go hand in hand.<a href="#ref1">⤣</a></p> <p id="2" class="footnote">2. I'm biased towards networked services and backend systems but the same properties are desirable in client applications as well.<a href="#ref2">⤣</a></p> <p id="3" class="footnote">3. Swift is similar in many ways while being more modest in what it's aiming to achieve.<a href="#ref3">⤣</a></p> <p id="4" class="footnote">4. According to Google's <a href="">Site Reliability Engineering book</a>, 40 to 90% of the total costs of a system are incurred after it has been built.<a href="#ref4">⤣</a></p> Why Intuition Works 2020-09-16T00:00:00+00:00 <p>It's easy to be sceptical of intuition.
Making decisions unconsciously, without understanding the reasons behind them or being able to articulate the way we arrived at them - this seems irresponsible. On the other hand, if intuition is something that's characteristic of us as a species, there's a good chance it was acquired for a reason and has some value.</p> <p>If you are dismissive about the value of intuition, imagine just for a second why it <em>might</em> work. A plausible explanation is actually very simple: finding an answer is much easier than explaining how to arrive at the answer. If you have tried to explain a problem you are comfortable solving to someone else, you've probably experienced this feeling yourself. Our brain can collect countless subtle cues about a problem and construct a way to obtain the result. Understanding how we arrived at the solution requires a lot of additional effort: reflective study of ourselves and analysis of our cognitive process. For this reason there are a lot more questions we can answer than we can explain.</p> <p>As with much of human behaviour, understanding why we are wired in a certain way might require looking into the past. The fact that intuition is something we all share to a certain degree suggests that it is likely an evolutionary adaptation. It's easy to imagine that throughout most of the history of our species, especially in the early days, making decisions was more important than being able to explain them. Natural selection punished indecisiveness. It was important to resolve whether to run or to fight in a split second. In fact, more important than making the right call. Delaying a decision would mean imminent death in many prehistoric situations.</p> <p>This leaves us with the fact that intuition can be useful in making a decision. Especially a time-sensitive decision, as intuition is the tool produced by evolution specifically for this kind of situation.
Of course it doesn't mean that we should be content with the tools inherited from our ancestors. One of the effects of the progress made by our civilization is that we no longer need to spend all of our time fighting for resources, giving us a chance to contemplate the question &quot;why&quot;.</p> Taking the First Step 2020-09-10T00:00:00+00:00 <p>I'll be upfront: the only reason for this post is to resume this blog. It's been almost two years since I published the last article. And it's not for the lack of things to write about: I had too many ideas over this time. Not for the lack of effort either: there are a few drafts that never made it live. I could ponder the reasons why this has happened, but it doesn't really matter. If you want to achieve something, you've got to take the first step. It doesn't have to be perfect and you cannot wait for the ideal moment. Life is too short for that. Now go and take <em>your</em> first step.</p> Detecting Java OutOfMemoryError before it happens 2018-09-25T00:00:00+00:00 <p>Is it even possible, you might ask? Well, not really, we can't predict the future. But we <em>can</em> detect the situation leading to <code>OutOfMemoryError</code> - lack of free heap memory - before it actually occurs. Technically there're other causes of OutOfMemoryError <a href="">described in detail here</a> which are outside the scope of this article and arguably less frequent and interesting.</p> <p>Before moving forward, why can't we handle <code>java.lang.OutOfMemoryError</code> when it actually happens? The error is a <code>Throwable</code>, so we can catch it, right? Not quite. While technically the error can be caught, there're very few cases when it's actually useful to do so. Because Java is a garbage-collected language, allocations of objects on the heap happen all the time as part of normal program execution, including when an exception is caught (e.g.
to record the stack trace information). When a program runs out of memory all bets are off: any thread of the program can die at any time. You can no longer count on the application being in a sane state. &quot;Impossible&quot; things can happen. Okay, but at least the error will get logged? Unfortunately, that's not always true. Again, if a program has run out of memory you can't count on any operation to succeed. Writing a log, like everything else, allocates memory and can fail. As can simply writing a message to the standard error output. This makes the error hard to detect.</p> <p>If we can't do anything when an application runs out of memory, why should we care, can't we just let the application crash? One big reason is operability. Imagine getting an on-call alert about a web service being slow. Or unresponsive. Or down. It's so much easier when you know that the service ran out of memory rather than spending time in vain looking at the resource consumption graphs, garbage collector logs and application logs (hoping that a message about <code>OutOfMemoryError</code> was successfully written).</p> <p>Is detecting when an application runs out of memory <em>really</em> that hard? There are already ways to monitor heap memory usage. So what's wrong with using that to detect when the program is running low on memory? Having all the memory used and having no memory available are different things in a garbage-collected language. At any point in time there will be objects in memory that are actually used alongside yet-to-be-collected garbage. <code>OutOfMemoryError</code> happens not when all the memory is used but when none can be claimed back after garbage collection.</p> <p>To be more precise, there're two different ways to trigger an out of memory error when running low on heap memory: to try to allocate at once more memory than is available, and to spend most of the time on garbage collection without being able to claim back much memory.
The first one will happen, for example, if a program does things like allocating a large array, and is usually easier to detect. <code>OutOfMemoryError</code> will be thrown by the call that requests allocation of a large chunk of memory and hence is trivial to track down. Also, failing to get the requested memory doesn't necessarily mean that the whole program is screwed: the monitoring system and logs should still be available. In practice it's more common to deal with the second type of out of memory condition: a program spending a high percentage of time (e.g. 99%) doing garbage collection while being able to claim back less than a very small amount (e.g. 1%) of memory. In this case an application will gradually (sometimes quite fast) come to a halt, bringing in all the complications described in the paragraphs above.</p> <p>Is there any hope at all? Do not despair! There's a way to deal with the problem. And we don't even have to write a JVM agent in C (I'm sure that would be a fun exercise though). There're two quite different approaches to solving the problem of <code>OutOfMemoryError</code> detection. One is quite simple: it's possible to ask the JVM to execute an external command with the <code>-XX:OnOutOfMemoryError=&quot;&lt;shell command&gt;&quot;</code> flag. It can be any program or script that will send an alert, update a metric or perform some other action after the program has run out of memory. Because the command will be executed outside of the JVM as a separate process, the approach doesn't suffer from the same pitfalls as trying to deal with the condition in the Java program itself. While it might not be as convenient to have a separate program to track OOME, it is a viable solution.</p> <p>Wait, the title promised detecting OOME <em>before</em> it happens? Here comes the second approach. The idea is quite simple: instead of measuring memory usage we measure how much memory is still occupied right after a garbage collection was performed.
While it's not easy to get the number directly, there's a way to get a notification when a predefined threshold is exceeded. It's achieved with the <a href="">Java management API</a> (also exposed via <a href="">JMX</a>).</p> <p>Before we continue it's worth clarifying that in a modern JVM all the heap memory is split into separate areas or &quot;memory pools&quot; for efficiency reasons. The pools are usually generational: created objects first end up in a pool for the young generation, then a survivor space, eventually being placed into the old or tenured generation if an object is still in use after multiple collections. The split into generations and their number depend on the specific garbage collector used. When an OOME happens, it's the tenured generation memory pool that is full. Why the tenured pool specifically? Objects that are still needed move up the hierarchy of generations until ending up in the tenured generation. If an object is not used anymore it will be garbage collected and won't end up in the tenured memory pool (or will be removed from it). See <a href="">this great article</a> for an explanation of how the JVM heap is organized depending on which garbage collector is used. <a href="">VisualVM</a> has a plugin called Visual GC which is an awesome way to see what a running application's heap looks like, live.</p> <p>So we're interested in being notified about running low on space in the tenured generation memory pool. An interface for interacting with a JVM memory pool is provided by <a href=""><code>MemoryPoolMXBean</code></a>. The bean for the pool we're interested in can be obtained by filtering the result of <a href=""><code>ManagementFactory.getMemoryPoolMXBeans()</code></a>. Firstly, we're interested in heap memory pools and secondly, in the one that supports a usage threshold.
The usage threshold is only supported for the tenured generation memory pool; the reason given in the documentation is <a href="">efficiency</a>: young generation memory pools are intended for high-frequency allocation of mostly short-lived objects, and a usage threshold has little meaning in this context. Without further delay, below is the code to find the tenured generation <code>MemoryPoolMXBean</code>:</p> <pre data-lang="java" class="language-java "><code class="language-java" data-lang="java">MemoryPoolMXBean tenuredGen = ManagementFactory.getMemoryPoolMXBeans().stream()
    .filter(pool -&gt; pool.getType() == MemoryType.HEAP)
    .filter(MemoryPoolMXBean::isUsageThresholdSupported)
    .findFirst()
    .orElseThrow(() -&gt; new IllegalStateException(
        &quot;Can&#x27;t find tenured generation MemoryPoolMXBean&quot;));
</code></pre> <p>Now that we have access to the <code>MemoryPoolMXBean</code>, setting a threshold for memory usage right after collection is simple:</p> <pre data-lang="java" class="language-java "><code class="language-java" data-lang="java">tenuredGen.setCollectionUsageThreshold(X);
</code></pre> <p>X would be an absolute number in bytes.
Note that the size of the tenured memory pool depends on both heap and GC configuration, so we need to set the threshold to a value relative to the maximum size of the pool (the specific value suitable for detecting out of memory situations will have to be determined experimentally):</p> <pre data-lang="java" class="language-java "><code class="language-java" data-lang="java">double threshold = 0.99;
MemoryUsage usage = tenuredGen.getUsage();
&#x2F;&#x2F; the threshold is a long; an int cast would overflow for large heaps
tenuredGen.setCollectionUsageThreshold((long) Math.floor(usage.getMax() * threshold));
</code></pre> <p>Now there are two ways to know if the threshold is exceeded: one is to poll a count with <a href=""><code>MemoryPoolMXBean.getCollectionUsageThresholdCount</code></a> and another is to subscribe to be notified every time the threshold is exceeded, which is what's needed for our purpose:</p> <pre data-lang="java" class="language-java "><code class="language-java" data-lang="java">NotificationEmitter notificationEmitter =
    (NotificationEmitter) ManagementFactory.getMemoryMXBean();
notificationEmitter.addNotificationListener((notification, handback) -&gt; {
    if (MemoryNotificationInfo.MEMORY_COLLECTION_THRESHOLD_EXCEEDED
            .equals(notification.getType())) {
        &#x2F;&#x2F; Log, send an alert or whatever makes sense in your situation
        System.err.println(&quot;Running low on memory&quot;);
    }
}, null, null);
</code></pre> <p>So we've got a system in place to detect when the application approaches an out of memory error. There's a detail that needs to be dealt with for the solution to work correctly: the JVM heap can grow, and the tenured generation memory pool together with it, making the previously set collection usage threshold incorrect. To mitigate the problem we can leverage memory pool usage threshold notifications, which in themselves do not signify a problem, as was explained above, but will be triggered before the collection threshold is exceeded.
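</p> <p>To sketch how the pieces can fit together, here's a hypothetical <code>LowHeapMemoryMonitor</code> (the class and method names are illustrative, not from any library) that sets both thresholds relative to the pool's maximum size and refreshes them when the heap grows:</p> <pre data-lang="java" class="language-java "><code class="language-java" data-lang="java">import java.lang.management.ManagementFactory;
import java.lang.management.MemoryNotificationInfo;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;
import javax.management.NotificationEmitter;

public class LowHeapMemoryMonitor {
    private static final double THRESHOLD = 0.9; &#x2F;&#x2F; tune experimentally

    public static void start() {
        MemoryPoolMXBean tenuredGen = ManagementFactory.getMemoryPoolMXBeans().stream()
            .filter(pool -&gt; pool.getType() == MemoryType.HEAP)
            .filter(MemoryPoolMXBean::isUsageThresholdSupported)
            .findFirst()
            .orElseThrow(() -&gt; new IllegalStateException(
                &quot;Can&#x27;t find tenured generation MemoryPoolMXBean&quot;));
        updateThresholds(tenuredGen);
        NotificationEmitter emitter =
            (NotificationEmitter) ManagementFactory.getMemoryMXBean();
        emitter.addNotificationListener((notification, handback) -&gt; {
            String type = notification.getType();
            if (MemoryNotificationInfo.MEMORY_COLLECTION_THRESHOLD_EXCEEDED.equals(type)) {
                &#x2F;&#x2F; Low-overhead action first; alerting can follow
                System.err.println(&quot;Running low on memory&quot;);
            } else if (MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED.equals(type)) {
                &#x2F;&#x2F; The pool may have grown together with the heap: recompute thresholds
                updateThresholds(tenuredGen);
            }
        }, null, null);
    }

    private static void updateThresholds(MemoryPoolMXBean pool) {
        long max = pool.getUsage().getMax(); &#x2F;&#x2F; can change as the heap grows
        if (max &lt; 0) {
            return; &#x2F;&#x2F; the maximum is undefined for this pool
        }
        long threshold = (long) Math.floor(max * THRESHOLD);
        pool.setCollectionUsageThreshold(threshold);
        pool.setUsageThreshold(threshold);
    }
}
</code></pre> <p>Calling <code>LowHeapMemoryMonitor.start()</code> early in <code>main</code> is enough; the listener is invoked on a JVM-internal thread.</p> <p>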
To set the threshold:</p> <pre data-lang="java" class="language-java "><code class="language-java" data-lang="java">tenuredGen.setUsageThreshold((long) Math.floor(usage.getMax() * threshold));
</code></pre> <p>The notification listener for the memory pool can be extended to handle the <code>MEMORY_THRESHOLD_EXCEEDED</code> notification type and update the thresholds.</p> <p>The solution presented in the article is not perfect and it's important to understand its limitations. The two main ones I can think of are running out of memory early in the application startup, before the heap monitoring is set up, and an <code>OutOfMemoryError</code> that is thrown when trying to allocate a large chunk of memory at once. The first one can be mitigated by making sure <code>LowHeapMemoryMonitor</code> is created early in the application life cycle. The second limitation can be hit when allocating a large array, for example. Both of these problems are usually possible to detect early on, before the application is deployed to production. Another kind of issue you can run into is memory being consumed really fast: even if the notification about the collection usage threshold being exceeded is received, the application can fail to react fast enough and run out of memory before the listener completes its work. If the desired action is not very quick and may require memory allocation of its own, like sending remote logs or an email, it might be wise to perform a low-overhead operation first, e.g. write to <code>System.err</code>. If you find that the application misses taking its out of memory actions, it might make sense to lower the collection threshold.</p> <p>Credits</p> <ul> <li> <p><a href="">the StackOverflow question about the issue</a></p> </li> <li> <p><a href="">the article describing the idea</a>.
The solution presented in my article is basically the same but also handles a dynamically growing heap.</p> </li> </ul> Granular Git Configuration 2018-04-16T00:00:00+00:00 <p>Even though in most cases having a single Git configuration is enough, sometimes more granular control is needed. Let's say you have a common Git configuration you use on your personal server, a laptop and a desktop. You probably want to share that configuration across the machines as part of your <a href="">dotfiles repository</a>. Also you have a work laptop and you need some special Git configuration for work projects. Occasionally you commit to your personal repositories or some open source repositories from the work laptop, and you don't want to have the work configuration applied in those cases. Let's see how you can organize the Git configuration to match the described setup step by step.</p> <h2 id="global-configuration">Global configuration</h2> <p>Firstly, the configuration you share between all the machines will be the global Git configuration. It's stored in <code>~/.gitconfig</code> (and can be modified using <code>git config --global</code> commands as well). This file will be the same everywhere and can live in your dotfiles repository. For simplicity let's say the configuration contains a user name and email. In my case that would be:</p> <pre><code>[user]
    name = Andrew Barchuk
    email = [email protected]
</code></pre> <h2 id="local-configuration">Local configuration</h2> <p>To have configuration specific to a particular machine you can include an additional local configuration file by adding the following to <code>.gitconfig</code>:</p> <pre><code>[include]
    path = ~&#x2F;.gitconfig.local
</code></pre> <p>Now machine-specific Git configuration can be added to <code>.gitconfig.local</code>. E.g.
to use a different email on your work laptop:</p> <pre><code>[user]
    email = [email protected]
</code></pre> <p>Any other configuration that should be different for a specific machine can be overridden the same way.</p> <h2 id="conditional-include">Conditional include</h2> <p>A problem arises, however, if you still want to be able to commit to repositories not related to work using your ordinary email. Instead of overriding the email on the work laptop globally, it can be done only for repositories located in a specific directory where work projects are kept, say <code>~/work</code>. <code>includeIf</code> can do exactly what we need (see <code>man 1 git-config</code> for more details). Here's how .gitconfig.local will look:</p> <pre><code>[includeIf &quot;gitdir:~&#x2F;work&#x2F;&quot;]
    path = ~&#x2F;
</code></pre> <p>The email and other work-specific configuration will be placed in <code>~/</code>. Note that having a trailing <code>/</code> after the directory name is important. Now Git won't apply the work-related configuration to your personal dotfiles repository in <code>~/dotfiles/</code> but will do that for <code>~/work/webapp/</code>.</p> <h2 id="bonus-repository-specific-configuration">Bonus: repository specific configuration</h2> <p>If for some reason you need to change Git configuration for a single repository only, it can be done by editing the <code>.git/config</code> file or simply with <code>git config</code> (no <code>--global</code> flag this time).</p> <p>If you're curious about my dotfiles feel free to check out <a href="">the GitHub repository</a>.</p> Build Yourself Arch Linux, Part 3 2017-10-11T00:00:00+00:00 <h1 id="part-3-let-s-get-a-gui">Part 3: Let's Get a GUI</h1> <p>This is the third and final part of my Build Yourself Arch Linux series (<a href="">part 1</a>, <a href="">part 2</a>).
In this part I'll finally get to a graphical environment setup.</p> <h2 id="gnome">GNOME</h2> <p>Before settling on <a href="">GNOME</a> I've tried (well, installed and played around with for 10 minutes) most of <a href="">the desktop environments supported by Arch</a>. I used GNOME 3 in the past but decided to see what else is out there. The reason I've settled on GNOME now is HiDPI support. While there're other desktop environments that support HiDPI, GNOME gave me the best result with pretty much no configuration. I was really tempted by <a href="">KDE Plasma</a> which looks gorgeous and does support HiDPI. Still, on the MacBook's screen GNOME was a bit more consistent: I got icons that were too small here and there in Plasma. <a href="">LXQt</a> is another DE I'm interested in. Given the progress towards HiDPI support, or the possibility of getting an external monitor with an ordinary resolution, I'll probably be able to reevaluate my choice of desktop environment soon enough.</p> <h3 id="installation">Installation</h3> <p>Installation of GNOME was quite an easy task: I went for the &quot;minimal&quot; <code>gnome-shell</code> package which is around 750 MB of dependencies in total (instead of around 1500 MB for <code>gnome</code> and 2030 MB for <code>gnome-extra</code>). As I was going to use <a href="">Wayland</a>, I had to get <a href="">XWayland</a> separately: the <code>xorg-server-xwayland</code> package wasn't pulled in as a dependency, which led to <code>gnome-shell[600]: Failed to spawn Xwayland: Failed to execute child process &quot;/usr/bin/Xwayland&quot; (No such file or directory)</code> when starting GNOME.</p> <h3 id="startup">Startup</h3> <p>Because I still do some stuff in the text console, I have created a tiny script to easily start GNOME under Wayland manually as described on <a href="">ArchWiki</a>. I've tried to use <a href="">GDM</a> first but it's not really needed in my setup. E.g.
it doesn't make sense to type both the disk encryption password and the user's password to boot, and if I'm not going to select different users/sessions, why have a display manager in the first place? When starting GNOME I see a couple of error messages <code>Activated service 'org.freedesktop.systemd1' failed: Process org.freedesktop.systemd1 exited with status 1</code> that don't seem to be critical; you can find a related discussion on <a href="">GitHub</a>.</p> <h3 id="workman-layout">Workman layout</h3> <p>I can't do much on a computer without a Workman keyboard layout. Fortunately it was installed already as part of <code>xkeyboard-config</code>, pulled in by GNOME. The only thing I had to do was to remap Caps Lock to Control, which means editing the <code>workman</code> section in <code>/usr/share/X11/xkb/symbols/us</code> and replacing the <code>key &lt;CAPS&gt;</code> mapping with <code>{ [ Control_L ] };</code> (there's probably a better way to <em>override</em> the keymap configuration instead, to prevent the modification from being erased by an upgrade of the package).</p> <h3 id="disable-cursor-blinking">Disable cursor blinking</h3> <p>As you might already know from the previous part of the guide I don't particularly enjoy cursor flickering. Fortunately it is possible to disable it system-wide for GUI applications in GNOME: <code>gsettings set org.gnome.desktop.interface cursor-blink false</code>.</p> <h3 id="more-gnome-apps">More GNOME apps</h3> <p>After getting GNOME working I've installed <code>gnome-control-center</code> which gives access to graphical preferences. The package asks you to select one of the <code>libx264</code> and <code>libx264-10bit</code> dependencies; see <a href="">the Reddit post</a> on why you probably want the non-10-bit version. 
To be able to change desktop backgrounds I've also grabbed the <code>gnome-backgrounds</code> package.</p> <p>For a graphical file manager I've installed <code>nautilus</code> - the default one in GNOME, plus <code>sushi</code> which gives a preview of a selected file when hitting space, a shortcut I used to have from macOS. Some other GNOME packages I have installed: <code>gnome-documents</code> - to read books (e.g. EPUB); <code>gnome-calculator</code> - gives the ability to use GNOME Overview search as a calculator as well; <code>gnome-dictionary</code> - to look up definitions, an analogue of the macOS built-in dictionary; <code>gnome-keyring</code> - system-wide secret storage; <code>gnome-screenshot</code> - a really great tool for taking screenshots; <code>tracker</code> - to search for files from the Overview; <code>gnome-clocks</code> and <code>gnome-weather</code> - to get multiple clocks and a forecast in the calendar drop-down respectively; <code>gnome-maps</code> - a pretty good OpenStreetMap based application; <code>gnome-tweak-tool</code> - to have access to more detailed graphical configuration.</p> <h3 id="extensions">Extensions</h3> <p><code>gnome-shell-extensions</code> is a bundle of default GNOME extensions from which I use WindowsNavigator to switch between windows in the Overview using a keyboard. See <a href="">the ArchWiki</a> for details on how to enable extension management.</p> <h3 id="hide-unwanted-desktop-icons">Hide unwanted desktop icons</h3> <p>To get rid of icons of applications installed as dependencies that you don't use, add <code>NoDisplay=true</code> to the .desktop file. Copy the desktop file to <code>~/.local/share/applications</code> to not have it overwritten by every application update.</p> <h3 id="not-a-perfect-story">Not a perfect story</h3> <p>One thing I don't like about GNOME is that it's very monolithic (though I'm not sure whether the problems described below are with GNOME itself or with the Arch packages specifically). 
It expects you to install the full suite of applications along with GNOME Shell itself (judging by the number of error messages in the logs about missing <a href="">D-Bus</a> services for GNOME applications I've opted not to install). On one hand, if you install only the shell you'll end up with non-functional GUI elements (desktop background selection, settings button); on the other hand, if you install <code>gnome-control-center</code> the Cheese webcam application gets pulled in (which is fine as I use it to test my webcam before calls, but annoying nevertheless).</p> <h2 id="terminal-emulator">Terminal emulator</h2> <p><a href="">Tilix</a> is a more feature-rich alternative to GNOME Terminal and a quite close rival to macOS' iTerm. It's quite a young project; there's no Tilix package in the official repositories yet. The AUR package works well, except that you'll have to recompile Tilix when its dependency <code>gtkd</code> is updated, otherwise it will fail to launch. I've enabled &quot;Run command as a login shell&quot; in the Tilix profile configuration (section Command) to enable shell integration (like opening new tabs in the same working directory).</p> <p>Not tied to a specific terminal emulator, but to make the shell more pleasant to use I've installed <code>bash-completion</code>.</p> <h2 id="building-packages">Building packages</h2> <p>When you install AUR packages or rebuild official packages from source, the <code>xz</code> utility is used to get a smaller package size at the expense of build time. The compression step takes a significant portion of the build time, and by using multiple threads as described <a href="">here</a> you can reduce that time significantly. The same way you can speed up compilation of packages by <a href="">using multiple threads</a>.</p> <p>You can also build (slightly) faster binaries when compiling from source by sacrificing portability, which doesn't matter if you run the built packages only on your own machine. 
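</p> <p>The two threading tweaks above boil down to a couple of lines in <code>/etc/makepkg.conf</code> (a sketch; the thread count is an assumption, adjust it to your CPU):</p> <pre><code># &#x2F;etc&#x2F;makepkg.conf
# parallel compilation, roughly the number of CPU cores
MAKEFLAGS=&quot;-j4&quot;
# multi-threaded xz compression, --threads=0 autodetects the core count
COMPRESSXZ=(xz -c -z - --threads=0)
</code></pre> <p>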
To use the instruction set of your specific CPU, add <code>-march=native</code> to the compiler flags (<code>CFLAGS</code> and <code>CXXFLAGS</code> in <code>/etc/makepkg.conf</code>). See <a href="">ArchWiki</a> for more information about package optimization.</p> <h2 id="browser">Browser</h2> <p>Not much to say here really. Firefox works great on Linux and with the recent improvements to speed and stability in versions 55-57 I've got no reason to look elsewhere.</p> <h2 id="webcam">Webcam</h2> <p>There's <a href="">an ongoing effort</a> to provide a Linux driver for the FaceTime camera. To get it working just get <code>bcwc-pcie-git</code> from AUR. To test the lighting before video calls the <code>cheese</code> program from GNOME works well.</p> <h2 id="graphics-drivers">Graphics drivers</h2> <p>Fortunately I have only an integrated Intel GPU which has very good Linux support. Following <a href="">the ArchWiki article</a> I've installed <code>xf86-video-intel</code> to get 2D hardware acceleration and <code>vulkan-intel</code> to have <a href="">Vulkan API</a> support; mesa for 3D acceleration was pulled in by GNOME already. There are two <a href="">APIs for video hardware acceleration</a> on Linux: VA-API and VDPAU, developed by Intel and Nvidia respectively. To enable VA-API I've installed <code>libva-intel-driver</code> and, to verify that it's indeed available, <code>vainfo</code> from <code>libva-utils</code> (from AUR). I didn't get VDPAU to work under Wayland (even though it worked out of the box under Xorg). This is not critical in my case since my video player of choice <a href="">mpv</a> supports VA-API. To avoid the screen flickering multiple times during boot I've enabled early loading of the i915 kernel module (Intel graphics) as described <a href="">here</a>. 
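</p> <p>For reference, the early KMS setup amounts to listing the module in <code>/etc/mkinitcpio.conf</code> and rebuilding the initramfs (a sketch; the exact MODULES syntax depends on your mkinitcpio version):</p> <pre><code># &#x2F;etc&#x2F;mkinitcpio.conf
# load the Intel graphics driver from the initramfs, before the root is mounted
MODULES=&quot;i915&quot;
</code></pre> <p>Then regenerate the image with <code>mkinitcpio -p linux</code>.</p> <p>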
The downside is the high brightness during early boot until the disk password is entered.</p> <h2 id="touchpad">Touchpad</h2> <p>At first I've tried the <code>xf86-input-mtrack</code> touchpad driver to get the beloved 3-finger drag gesture that I was so used to on macOS. But the driver is Xorg-only, meaning I wouldn't be able to keep it when moving to Wayland. I did some research and basically the gesture is not coming to Wayland anytime soon (see <a href="">my Reddit post</a>). After some frustration I gave up on the idea, decided to unlearn the gesture and stuck with Wayland.</p> <p>In the end the only tweaking I did was to enable tap to click, increase touchpad speed and enable natural scrolling via GNOME Settings.</p> <h2 id="text-editor">Text editor</h2> <p>I've replaced vim with gVim (one of the reasons being that vim is compiled without +clipboard, meaning no clipboard access). Of course, the command line vim binary is included in the package as well. Because gVim's icon looks ugly on high resolution displays I've replaced it with the one from <a href="">the VimR project</a>. To do it, change the <code>Icon</code> value in gVim's .desktop file to the path to the new image (copy the desktop file first to prevent overwrites as mentioned <a href="">above</a>).</p> <h2 id="video-player">Video player</h2> <p>I've settled on using <a href="">mpv</a> which is very simple and powerful at the same time and has a nice command line interface. To enable hardware acceleration (which reduces CPU use significantly), two lines with <code>hwdec=vaapi</code> (use VA-API supported by my Intel GPU) and <code>opengl-backend=wayland</code> (make driver detection work properly under Wayland) in <code>~/.config/mpv/mpv.conf</code> were all I needed.</p> <h2 id="power-saving">Power saving</h2> <p>Even though I probably didn't get to 100% of the battery life I had with macOS, I've come pretty close without spending too much time on it. 
First I've installed <a href=""><code>powertop</code></a> which gives an interactive overview of power usage on a system and also allows applying a predefined set of rules to save energy. As described by the wiki, I've created a systemd service to automatically apply powertop's suggestions on system startup. <a href="">TLP</a> is another tool to save some battery life; see the installation section of the page for which systemd services it needs to have enabled/disabled. During experiments with power saving my Wi-Fi card got <a href="">hardware blocked</a> once, which wasn't easy to troubleshoot as the MacBook has no indicator for the switch, but was simple to fix with the <code>rfkill</code> utility. Following the <a href=",x#Powersave">ArchWiki power saving recommendations for my MacBook</a> I've disabled the card reader as it was showing up in powertop and I don't use the device.</p> <h2 id="conclusions">Conclusions</h2> <p>It's been a long time (almost a year since the first post!) and an exciting journey; I have learned a lot. I'm not looking back at all really. While in the beginning it was overwhelming to learn so much stuff at once, now I feel as productive as on macOS (if not more). Not having to deal with the App Store, iTunes, a million ways to update installed programs and slow macOS updates is great.</p> <p>Hope the series was useful to you. If you have any feedback, please <a href="">send it to me</a>.</p> How to Partition an External Hard Drive for macOS 2017-06-15T00:00:00+00:00 2018-09-25T00:00:00Z <p>TL;DR: macOS expects a 200 MB EFI System partition at the beginning of a hard drive, doesn't like unformatted partitions and creates 128 MB Apple boot partitions after each real partition whenever you format it.</p> <p>I needed to partition an external hard drive to be usable on macOS (for <a href="">Time Machine</a> backups). 
I've partitioned the drive using the <a href="">GPT</a> scheme and created one unformatted Apple HFS/HFS+ partition on Arch Linux using <a href="">fdisk</a>. It was not recognized (the &quot;inserted disk is not readable&quot; dialogue), nor was I able to format the partition (with a &quot;Media kit reports not enough space on device&quot; error), which was shown as full in the Disk Utility. When I formatted the partition (using mkfs.hfsplus from <a href="">hfsprogs</a>) it was recognized, but I was unable to initialize it for Time Machine or format it, with the same error as before. Finally, I partitioned the drive from macOS and created a new partition. Looking at the drive with fdisk I discovered that an EFI System partition had been created at the beginning of the disk with a size of 200 MB. When I recreated the same partitioning layout using fdisk it was recognized properly. After setting up Time Machine backups on the created partition I inspected the disk once more. The partition type had been changed to Apple Core storage and a 128 MB partition had been created after it. When I created another partition for the second MacBook I got one more 128 MB Apple boot partition. The resulting partition table was:</p> <pre><code>Device          Start       End   Sectors  Size  Type
&#x2F;dev&#x2F;sdb1        2048    411647    409600  200M  EFI System
&#x2F;dev&#x2F;sdb2      411648 268847103 268435456  128G  Apple Core storage
&#x2F;dev&#x2F;sdb3   268847104 269109247    262144  128M  Apple boot
&#x2F;dev&#x2F;sdb4   269109248 478824447 209715200  100G  Apple Core storage
&#x2F;dev&#x2F;sdb5   478824448 479086591    262144  128M  Apple boot
</code></pre> <p>So 5 partitions instead of the 2 I actually needed. 
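</p> <p>For the curious, a layout like the one macOS created can be sketched non-interactively with <code>sfdisk</code> instead of fdisk (device name and sizes are assumptions; macOS will still add its own Apple boot partitions when you format):</p> <pre><code># destructive: double-check the device name before running
sfdisk &#x2F;dev&#x2F;sdb &lt;&lt;'EOF'
label: gpt
# 200 MiB EFI System partition at the start of the disk
size=200MiB, type=C12A7328-F81F-11D2-BA4B-00A0C93EC93B
# Apple HFS&#x2F;HFS+ partition taking the rest of the disk
type=48465300-0000-11AA-AA11-00306543ECAC
EOF
</code></pre> <p>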
<a href="">Here</a> you can find Apple's partitioning policy describing when additional partitions are created, with some justifications trying to answer why they are needed.</p> What's Wrong with WhatsApp Message Tunneling 2017-05-07T00:00:00+00:00 2018-09-25T00:00:00Z <p>A colleague asked me on Twitter what problems I have with the way WhatsApp tunnels all the messages through my phone:</p> <blockquote> <p><a href="">@raindev_</a>: I'm tired of WhatsApp tunneling all the messages through my phone. Time to look for an alternative?</p> </blockquote> <blockquote> <p><a href="">@JensRantil</a>: Also, what part about the tunneling do you find annoying? Very rarely don't I have my phone on same Wi-Fi as computer.</p> </blockquote> <p>The answer turned out to be too long for Twitter so I decided to write a short post.</p> <p>Just to clarify: it's not required for the phone to be on the same Wi-Fi network, being connected to the Internet is enough.</p> <p>While it's true that most of the time my phone is connected, I have a few issues with this approach.</p> <ol> <li> <p>It doesn't work truly reliably. Sometimes I need to wake up my phone to connect/reconnect to WhatsApp from the desktop.</p> </li> <li> <p>It drains the phone battery unnecessarily. Especially if I'm connected to WhatsApp the whole day, e.g. at an office. While the impact may not be huge really, I don't see a reason to sacrifice part of the already scarce phone battery life.</p> </li> <li> <p>My phone isn't really <em>always</em> connected. I don't want to fall offline if I forget a charger, forget to charge, or forget my phone altogether. Made worse by the fact that I need to scan a barcode from the phone to connect, which is awkward to do remotely.</p> </li> <li> <p>Because all the messages have to be tunneled through my specific device, I can't use WhatsApp on two different phones at the same time. 
Yes, I do use two phones regularly.</p> </li> <li> <p>If the phone is offline, there's no access to old messages as all of them are stored on the phone only.</p> </li> <li> <p>Also, while not directly caused by the tunneling but tied to the decision to do authentication by phone number: it's impossible to log out a device remotely, so I'd better not lose my phone; the lack of an iPad app is probably a result of the same line of decisions.</p> </li> </ol> <p>While all of the issues above are not critical (in fact I've used WhatsApp as my main messenger successfully for more than a year), they make me look at alternatives from time to time (the current contender is <a href="">Wire</a>).</p> Build Yourself Arch Linux, Part 2 2017-04-02T00:00:00+00:00 2018-09-25T00:00:00Z <h1 id="part-2-getting-work-done-from-console">Part 2: Getting Work Done from Console</h1> <p>This is the second part of the series of articles (<a href="">part 1</a>, <a href="">part 3</a>) about setting up Arch Linux on my MacBook. The main goal of this part is to make the installation actually useful for doing some work. In some sense, I want to bootstrap the series to be able to work on the posts under Linux. Disclaimer: I won't get to setting up a graphical environment in this part; while it's possible to do everything from this article <em>after</em> installing a graphical environment, I've decided to see how far I can get without one.</p> <h2 id="unprivileged-user">Unprivileged user</h2> <p>It is <a href="">not a good idea</a> to use a computer as the root user all the time. <a href="">Here</a> is how to create a new unprivileged user and set its password. Beware that a user's home directory is created from a template called skeleton and the default skeleton (<code>/etc/skel</code>) contains only a few basic dotfile templates like <code>.bashrc</code>. 
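</p> <p>For completeness, a minimal sketch of those steps, assuming the user name <code>raindev</code>:</p> <pre><code># as root: create the user with a home directory populated from &#x2F;etc&#x2F;skel
useradd -m raindev
# set the new user's password
passwd raindev
</code></pre> <p>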
It may be confusing if you are used to seeing a bunch of predefined media directories in your home directory.</p> <h3 id="sudo">sudo</h3> <p><a href="">sudo</a> is another recommended security measure. It is not pre-installed; you'll need to get it with <code>pacman</code>. I used the group <code>wheel</code> to give access to <code>sudo</code>. The newly created user needs to be added to the group. <a href=""><code>visudo</code></a> is the way to control access to the <code>sudo</code> command. Besides giving <code>wheel</code> members privileged access I also allowed reboot/shutdown to be performed without the need for a password. To do that, while editing <code>/etc/sudoers</code> I have uncommented the line with <code>Cmnd_Alias</code> for <code>REBOOT</code> and added the following line: <code>raindev ALL= NOPASSWD: REBOOT</code>. Now I can run <code>sudo poweroff</code> to shut down my laptop and won't be asked for the password. The same way a user can be given permission to execute other commands requiring privilege escalation without a password.</p> <h3 id="polkit">polkit</h3> <p>Another option to access power management without the need for a password is <a href=""><code>polkit</code></a>, which is a framework for granular access management. Given <code>polkit</code> is installed and running (I had to reboot before <code>systemctl start polkit</code> succeeded), you'll be able to use <code>systemctl reboot/poweroff/suspend</code> without a password, given that there are no other users logged in.</p> <h3 id="automatic-login">Automatic login</h3> <p>Considering that the disk encryption password is required to boot, there's no point in having to enter the user's password every time to log in. The program that manages virtual terminals and their access is called getty. Read <a href="">ArchWiki</a> for how to configure automatic login for getty.</p> <h2 id="unplugging-the-laptop">Unplugging the laptop</h2> <h3 id="wi-fi">Wi-Fi</h3> <p>To get Wi-Fi working a driver is needed. 
The problem is my Wi-Fi card (Broadcom BCM4360) is not supported by the kernel itself and there's no driver available from the official Arch Linux repositories. The good news is that almost any package you may need is available from the <a href="">Arch User Repository</a> if not found in the official repositories. <a href=""><code>broadcom-wl-dkms</code></a> is the package we need (update: it has been moved into the official community repository). It's a good idea to install the <a href="">DKMS</a> version of the driver to not have to recompile it manually after each kernel upgrade. You'll need to get the <code>dkms</code> package before installing the driver. There are two ways of installing AUR packages: <a href="">manual</a> or using <a href="">AUR helpers</a>. I use <a href="">pacaur</a> but installing packages manually is not that hard and inspecting package files can provide useful insights when there are problems. Do not forget to install the <code>base-devel</code> package group as the build will fail without it.</p> <p>When the wireless driver is installed the only thing left is to get the <code>dialog</code> and <code>wpa_supplicant</code> packages, which are needed to be able to use the <code>wifi-menu</code> tool. Connecting to wireless networks is dead simple: <code>sudo wifi-menu</code>. Give the profile a meaningful name; it will be possible to quickly activate it using <code>sudo netctl start &lt;profile&gt;</code>.</p> <h3 id="brightness-and-battery-level">Brightness and battery level</h3> <p>Besides Wi-Fi, two things were bothering me while working away from the desk: how to change screen brightness and check battery charge. The answer is using files :) After some exploration in <code>/sys/class</code> I've found <code>/sys/class/backlight/intel_backlight/brightness</code> (or <code>/sys/class/backlight/acpi_video0/brightness</code> if you prefer percentage instead of absolute brightness value) and <code>/sys/class/power_supply/BAT0/capacity</code>. 
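</p> <p>A quick illustration of working with these files (the paths vary between machines, and writing requires root):</p> <pre><code># read the battery charge in percent
cat &#x2F;sys&#x2F;class&#x2F;power_supply&#x2F;BAT0&#x2F;capacity
# set the backlight to an absolute brightness value
echo 200 | sudo tee &#x2F;sys&#x2F;class&#x2F;backlight&#x2F;intel_backlight&#x2F;brightness
</code></pre> <p>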
They are just files, you can read them and write to them (battery charge is obviously read-only). Don't be afraid to look into your <code>/sys/class</code>, there's lots of interesting stuff. You can enable adjustment of brightness without the need for a password the same way as done for power management above, using <code>visudo</code>. E.g. I use <code>echo 200 | sudo tee /sys/class/backlight/intel_backlight/brightness</code> (notice the placement of <code>sudo</code>); therefore <code>NOPASSWD</code> should be set for <code>tee /sys/class/backlight/intel_backlight/brightness</code>.</p> <h3 id="color-scheme-and-cursor">Color scheme and cursor</h3> <p>On the laptop's screen I have found the colors to be hard to read (especially in command line browsers). I have found the Solarized color scheme adapted for the Linux virtual console <a href="">here</a>. It made the terminal much more comfortable for the eyes. While at console configuration, I have also changed the cursor to not blink and to be a nice solid light grey block. Read <a href="">this StackOverflow answer</a> for a quick solution (I have used <code>16;0;224</code> cursor configuration values); read the man page for <code>terminfo</code> for details.</p> <h2 id="blogging-toolchain">Blogging toolchain</h2> <p>The first thing I have decided to configure my new Arch Linux installation for is blogging. I use the <a href="">Hakyll</a> static website generator and host my blog using <a href="">GitHub Pages</a>.</p> <h3 id="web-browser-w3m">Web browser: w3m</h3> <p>To work on my blog I need a web browser, obviously. Out of curiosity I have decided to see how far I could get using a terminal browser. I have tried three options: <a href="">Lynx</a>, <a href="">ELinks</a> and <a href="">w3m</a>, and settled on the third one. Lynx feels like the simplest of the three. It would be fine but pages took ages to load for some reason. 
ELinks is really nice and full-featured but didn't work well with suspending to background and resuming (Ctrl-Z and <code>fg</code> respectively). The program froze after resuming (if any other command was run after suspending it), which breaks my command line workflow. w3m works great for me: it loads pages really fast, works seamlessly with backgrounding/foregrounding, and even has support for multiple tabs and bookmarks. It is not the most user-friendly program but I'll get used to it.</p> <h3 id="text-editor-vim">Text editor: Vim</h3> <p>Vim is my text editor of choice. While installation itself is dead simple, there are a few caveats. To use Vim in all the places where a text editor is needed, set the <code>VISUAL</code> and <code>EDITOR</code> <a href="">environment variables</a> (e.g. <code>export VISUAL=/usr/bin/vim</code> in <code>~/.bashrc</code>). There are also special cases. <code>visudo</code> doesn't use the environment variables to determine the text editor by default out of security concerns. Add <code>Defaults editor=/usr/bin/vim</code> to the <code>sudoers</code> file to use Vim. w3m uses an external editor for text field input. To make it pick up the <code>EDITOR</code> environment variable you'll need to go to its configuration by pressing <code>o</code> and clear the <code>Editor</code> configuration field. Now it should be safe to remove the <code>vi</code> package.</p> <h3 id="terminal-multiplexer-gnu-screen">Terminal multiplexer: GNU screen</h3> <p>To be able to quickly switch between w3m, Vim and the shell I use <a href="">GNU screen</a>. I had two issues with screen: the default command prefix Ctrl-A clashes with the bash movement to get to the beginning of the line, and screen flashes where you'd usually hear a bell sound in a graphical terminal emulator (e.g. backspacing on an empty line). The fixes are, respectively, <code>escape ^Jj</code> (changes the command prefix to Ctrl-J) and <code>vbell off</code> in <code>~/.screenrc</code>. 
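</p> <p>Put together, the two fixes in <code>~&#x2F;.screenrc</code>:</p> <pre><code># use Ctrl-J as the command prefix instead of Ctrl-A
escape ^Jj
# disable the visual bell flash
vbell off
</code></pre> <p>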
Because of the way console auto login is set up it's not possible to lock tty1 by exiting it. GNU screen's lock can be used instead (<code>x</code> in the default keybindings). In the future I'd like to try tmux instead of screen but I'm not up for learning and configuring it now.</p> <p>Using multiple virtual terminals is a simple alternative to a terminal multiplexer. To switch between terminals use <code>Alt</code> + right/left arrow. To scroll back use Shift-PageUp/Shift-PageDown (see <a href="">here</a> for more details). Even if you do use a terminal multiplexer there are boot messages in the console (printed before screen, for example, is launched) that you might want to scroll back and read. To not lose any boot messages I've added <code>fbcon=scrollback:512k</code> to the kernel boot options in the <a href="">systemd-boot entry configuration</a>, which expands the virtual terminal buffer.</p> <p>Hint: if your virtual terminal appears to be stuck, it might be that you unintentionally paused console output with <code>Ctrl-S</code>; to resume, use <code>Ctrl-Q</code>.</p> <h3 id="version-control-git-and-ssh">Version control: Git and SSH</h3> <p>To get access to the source of my blog on GitHub and to be able to publish updates I need Git. There are no issues obtaining Git itself using pacman. I use SSH keys to authorize access to my GitHub account without entering a password every time. To get it working an SSH client is needed and a key needs to be generated (I usually don't reuse SSH keys across environments) and added to GitHub. To get an SSH client I have installed the <code>openssh</code> package. See the <a href="">Git documentation</a> and the <a href="">GitHub guide</a> on how to generate an SSH key and add it to GitHub.</p> <h3 id="gpg">GPG</h3> <p>I use <a href="">GPG</a>, which is already installed as part of the <code>base</code> package group, to <a href="">sign my Git commits</a>. There are a few caveats to consider however. 
By default GPG uses a graphical password prompt provided by the <code>pinentry</code> program; see <a href="">here</a> how to change it to the curses-based version that works in the console. It is also required to set the <code>GPG_TTY</code> environment variable to <code>$(tty)</code> for <code>pinentry-curses</code> to work properly. You might need to <a href="">restart the GPG agent</a> for the changes to take effect. See <a href="">this gist</a> (Method 2) for instructions on how to move the GPG keys to your new environment. If you want to wipe the USB stick used to copy the key, it can be done using the <a href=""><code>shred</code> command</a>.</p> <h3 id="static-blog-generator-hakyll">Static blog generator: Hakyll</h3> <p>To build my Hakyll blog generator all I needed were the <code>ghc</code> and <code>stack</code> packages. No additional configuration was needed. All of the post above was already written under Arch Linux in the virtual console.</p> <h2 id="fix-meta-key-workman-keyboard-layout-specific">Fix Meta key (Workman keyboard layout specific)</h2> <p>I rely heavily on bash movement commands. E.g. M-b/M-f to move by words backward and forward. The &quot;M-&quot; prefix stands for &quot;Meta&quot;, which corresponds to Alt on contemporary keyboards (same as Option on Apple's keyboard). In terminals the Meta modifier is usually sent as Escape preceding a letter. Because of this it makes no difference in a terminal whether you press Alt-b or Escape followed by b. For some reason bash movements worked properly with the Escape key, but not with the Alt key. First I suspected that Alt was not configured properly to send the escape prefix. The <code>setmetamode</code> command revealed that Meta was in escape prefix mode. Then I noticed that Meta-b behaves exactly as Meta-t and Meta-f like Meta-u. In the Workman layout the key t is mapped to the letter b, and the same for u and f. Given that, I concluded that something was wrong with my Workman keymap. Indeed, the Alt modifiers weren't mapped properly and used definitions from the QWERTY layout. 
It was quite hard to find proper documentation about kbd keymaps online. The <a href="">ArchWiki article</a> is a pretty good overview and all the details are available in the keymaps man page, which is written very thoroughly. You can find <a href="">the patch</a> fixing the Meta characters on my GitHub.</p> <h2 id="disable-macbook-startup-sound">Disable MacBook startup sound</h2> <p>OS X stores the audio volume level in an EFI variable that is used during early boot to produce an appropriately loud annoying sound. EFI variables are already mounted as files under <code>/sys/firmware/efi/efivars/</code>. To disable the startup sound from Linux you'll need to set the volume to zero. See the ArchWiki <a href="">instructions</a>. For safety reasons most of the variables are immutable. To be able to rewrite the volume do <code>chattr -i /sys/firmware/efi/efivars/SystemAudioVolume-7c436110-ab2a-4bbb-a880-fe41995c9f82</code> as <code>root</code>. You can install <code>efivar</code> to check the value of the EFI variable. It won't harm to change the modified variable back to immutable (<code>chattr +i</code>).</p> <h2 id="suspend-issues">Suspend issues</h2> <p>While closing the lid to suspend worked well out of the box, with <code>systemctl suspend</code> the laptop went to sleep only for a few seconds and then woke up. This is not a new problem; I've found a solution on <a href="">ArchWiki</a>. <code>cat /proc/acpi/wakeup</code> will give you a list of devices allowed to wake up the machine. To prevent a device from waking up the system, write a device name from the first column to the file (e.g. <code>echo LID0 | sudo tee /proc/acpi/wakeup</code> for the lid). In my case it was the lid causing issues, not a USB controller as mentioned on the wiki. I haven't figured out a way to create a <code>udev</code> rule for the laptop lid so I created a simple systemd unit to disable it during startup (see <a href="">this StackOverflow answer</a>).</p> <p>The drawback is that closing the lid to suspend will no longer work. 
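</p> <p>The unit I created is along these lines (a sketch; the unit name is mine, and the device name comes from the first column of <code>&#x2F;proc&#x2F;acpi&#x2F;wakeup</code>):</p> <pre><code># &#x2F;etc&#x2F;systemd&#x2F;system&#x2F;disable-lid-wakeup.service
[Unit]
Description=Prevent the lid from waking up the system

[Service]
Type=oneshot
ExecStart=&#x2F;bin&#x2F;sh -c 'echo LID0 &gt; &#x2F;proc&#x2F;acpi&#x2F;wakeup'

[Install]
WantedBy=multi-user.target
</code></pre> <p>Enable it with <code>systemctl enable disable-lid-wakeup.service</code> so it runs on every boot.</p> <p>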
If you want to use both the lid and <code>systemd</code> to suspend, see <a href="">this Arch Linux Forum post</a> to toggle LID wakeup on the fly only for suspend (I haven't tried it). To mitigate the problem I've assigned my power button to suspend the laptop instead of shutting it down; instructions are available <a href="">here</a>. A bonus is that I won't halt the laptop by hitting the power button accidentally.</p> <h2 id="network-autoconfiguration">Network autoconfiguration</h2> <p>I have noticed (using <code>journalctl -p3 -b</code>) that there's a timeout error when trying to start the Ethernet adapter device (<code>sys-subsystem-net-devices-enp0s20u1.device</code>). Using <code>systemctl list-dependencies --reverse sys-subsystem-net-devices-enp0s20u1.device</code> I have figured out that it's <code>dhcpcd</code> triggering initialization of the missing device. It kind of makes sense as I do not have it plugged in when working on battery and have <code>dhcpcd@enp0s20u1.service</code> enabled (see <a href="">the first part of the guide</a>). At first I added a condition <code>ConditionPathExists=/sys/class/net/%i</code> to the <code>dhcpcd</code> systemd unit file (<code>/usr/lib/systemd/system/dhcpcd@.service</code>) to only start it if the network device is present. Later on I decided that I don't want a network connection to be established until started explicitly and disabled the <code>dhcpcd</code> service. The changes done to the dhcpcd systemd unit in the first part of the tutorial can also be reverted (e.g. by reinstalling the package).</p> <h2 id="backup">Backup</h2> <p>Following <a href="">the Arch Wiki recommendations</a> I have set up a basic backup to a USB flash stick. 
Backup includes a few top level directories (<code>/boot</code>, <code>/etc</code>, <code>/home</code>, <code>/data</code>, <code>/usr/local</code>, <code>/var</code>) copied over using <a href=""><code>rsync</code></a>, lists of native and foreign (coming from outside of the Arch Linux core package repositories, e.g. from AUR) pacman packages, and the LUKS-encrypted partition header. There's still plenty of room for improvement in my backup scheme: encryption, more granular or more complete file backup, backup to multiple places (right now I can lose a backpack with both my laptop and the backup USB stick), backup scheduling, and automated restore, to name a few.</p> <p>To ease mounting of the backup USB stick I've added the following line to <code>/etc/fstab</code>:</p> <p><code>UUID=8e713409-6935-4206-9476-067df8dee417 /mnt/aquamarine ext4 user,rw,noauto,nodev,nosuid,noexec 0 0</code></p> <p>The <code>user</code> option gives non-root users permission to mount the file system (see <code>man 8 mount</code>). <code>nodev,nosuid,noexec</code> are described below in <a href="">the security section</a>. <code>noauto</code> disables automatic mounting. Now <code>mount /mnt/aquamarine</code> doesn't require privilege escalation.</p> <h2 id="package-management">Package management</h2> <h3 id="pacman-mirrors">pacman mirrors</h3> <p>See the <a href="">instructions</a> on how to rank pacman repository mirrors by speed. After obtaining the list of fastest mirrors I've excluded all the mirrors that are not 100% synced according to the <a href="">Mirror Status</a> page. I have also removed all the http sources. I have no doubts about Arch <a href="">package signing practices</a> but I don't like leaking my package usage habits in plain text.
It might make an intruder's job easier if they know exactly which versions of which software I'm running and when I update it.</p> <h3 id="pacman-cleanup">pacman cleanup</h3> <p>While I was experimenting with different packages some transitive dependencies might have been left behind. Fortunately, pacman remembers the reason for a package's installation. To query all the packages installed as dependencies that are no longer required, do <code>pacman -Qdt</code>.</p> <p>By default pacman will keep all the packages that were installed at some point in time. It makes sense to clean them up occasionally. There's a handy script for exactly this task called <a href=""><code>paccache</code></a>. I'm not low on disk space by any means, so it doesn't make sense for me to clean the cache aggressively. It's there for a reason after all (e.g. I may need to downgrade a package, which is much easier having it in the cache). It's possible to clean up only uninstalled packages, and that's what I did. The trick is that you'll have to tell <code>paccache</code> to keep 0 versions of a package (see <code>-h</code> for details), otherwise it will not remove all the uninstalled package versions.</p> <h2 id="security">Security</h2> <p>Following the <a href="">security recommendations</a> I have added <code>nodev</code> and <code>nosuid</code> mount options to <code>/etc/fstab</code> for <code>/var</code>, <code>/home</code>, <code>/data</code> and <code>/boot</code>. The idea is that those file systems have no need to expose physical devices (<code>nodev</code>) nor to escalate permissions to the binary owner/group in case the <a href=""><code>suid</code></a> flag is set (<code>nosuid</code>). Usually those capabilities are needed only for the root file system.
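</p> <p>Concretely, the relevant <code>/etc/fstab</code> entries end up looking something like this (the UUIDs and file system types here are placeholders, not my actual values):</p> <pre><code># &lt;file system&gt;  &lt;dir&gt;   &lt;type&gt;  &lt;options&gt;              &lt;dump&gt; &lt;pass&gt;
UUID=...          /var    ext4    defaults,nodev,nosuid  0      2
UUID=...          /home   ext4    defaults,nodev,nosuid  0      2
UUID=...          /data   ext4    defaults,nodev,nosuid  0      2
UUID=...          /boot   vfat    defaults,nodev,nosuid  0      2
</code></pre> <p>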
For <code>/data</code> and <code>/boot</code> I have also set <code>noexec</code>, which disables binary execution from those partitions, as they are not intended to store any programs.</p> <p>To make files readable/writable/executable only for the owning user by default I replaced <code>umask 022</code> with <code>umask 077</code> in <code>/etc/profile</code>. The former gives read access by default to the user's group and other users, which is not really necessary. After a week or so I abandoned this idea, as it resulted in more commands requiring root to execute, which has unpleasant security implications (probably outweighing the benefits).</p> <p>Running an advanced text editor (e.g. vim) as <code>root</code> is equivalent to running a shell as root, which ought to be avoided when possible. To mitigate the problem the <code>sudoedit</code> command exists, which lets you edit a copy of the file as an unprivileged user and copies it back afterwards. <code>sudoedit</code> will use the editor configuration from <code>/etc/sudoers</code> so we're all set.</p> <h3 id="dnssec">DNSSEC</h3> <p>DNSSEC is to DNS (roughly) what HTTPS (SSL) is to HTTP. Both SSL and DNSSEC ensure the authenticity of information received from a web server and a DNS server respectively. The same as with HTTPS, DNSSEC protects only domains that are explicitly using it. The difference is that DNSSEC is only about verifying and not about encrypting data. There are really not that many reasons to encrypt DNS traffic, as the only sensitive information it contains is the names of the resources you connect to. This information will be revealed to your ISP anyway when you send a network request to the resource whose IP address you've identified via DNS. The only solution to keep the network resources you connect to private is some sort of VPN.</p> <p>The problem is that, unlike SSL, DNSSEC is not widely supported by networking software, so the easiest way to employ it is to use a DNS proxy.
I've chosen to install a local <a href="">Unbound</a> DNS server that supports DNSSEC and also does caching. Unbound is lightweight (it will be running on my laptop) and its configuration is very simple. To install Unbound with DNSSEC support you'll need the <code>unbound</code> and <code>expat</code> packages (the second is probably installed already). I've stripped the <code>/etc/unbound/unbound.conf</code> file of configuration set to defaults and made it listen on IPv6 localhost and prefer IPv6 when performing queries (to be ready when IPv6 takes over the world):</p> <pre><code>server:
    trust-anchor-file: trusted-key.key
    interface: ::1
    prefer-ip6: yes
</code></pre> <p>This is enough to use Unbound, but ArchWiki also recommends updating the information about the root DNS servers rather than relying on the defaults, see <a href="">here</a>. <code>unbound-checkconf</code> can be used to check the configuration for errors. To run Unbound I've just started and enabled the <code>unbound</code> systemd service. See <a href="">this section</a> to verify that DNSSEC is working.</p> <p>To use my own DNS resolver it needs to be specified in <code>/etc/resolv.conf</code> as <code>nameserver ::1</code> (actually that's the only required parameter for the file), see <code>man 5 resolv.conf</code>. Some networking software (e.g. dhcpcd and netctl) might change <code>resolv.conf</code> at runtime, which is not desirable in my case (related <a href="">ArchWiki section</a>). To prevent that from happening I've grepped through my netctl profiles (<code>/etc/netctl/</code>) for <code>DNS=</code> to make sure nothing is overriding the DNS configuration (see <code>man 5 netctl.profile</code>) and added <code>nohook resolv.conf</code> to <code>/etc/dhcpcd.conf</code> (see <code>man 5 dhcpcd.conf</code>).
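</p> <p>Put together, pinning the resolver comes down to a couple of lines (a sketch of the steps described above, run as root):</p> <pre><code># use the local Unbound instance for all DNS lookups
echo 'nameserver ::1' &gt; /etc/resolv.conf
# keep dhcpcd from rewriting the file
echo 'nohook resolv.conf' &gt;&gt; /etc/dhcpcd.conf
</code></pre> <p>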
Furthermore, to make sure that the file isn't modified unintentionally, I've made it immutable (<code>chattr +i</code>).</p> <h3 id="firewall">Firewall</h3> <p>I have set up the firewall manually using <a href="">iptables</a>. My configuration is basically the simple stateful firewall described on <a href="">ArchWiki</a> with some minor modifications. I've used the example file <code>/etc/iptables/simple_firewall.rules</code> as a template to configure my firewall.</p> <p>The default policy is <code>DROP</code> for the <code>INPUT</code> and <code>FORWARD</code> chains and <code>ACCEPT</code> for <code>OUTPUT</code>. There are five custom chains in my configuration: <code>logdrop</code> logs and drops packets; <code>logrejectproto</code>, <code>logrejectport</code> and <code>logrejectrst</code> log and reject packets with ICMP protocol unreachable, ICMP port unreachable and TCP reset respectively; <code>limitlog</code> logs packets with rate limiting (to 5 packets per second, logging the first 5 packets of each burst). All the rules I have are for the <code>INPUT</code> chain: accept localhost packets, accept packets from <code>RELATED</code>/<code>ESTABLISHED</code> connections and drop <code>INVALID</code> ones, and accept ICMP pings. After these rules is the point where I can add rules for opening ports if I ever need to. The rest of the packets are rejected with appropriate responses by jumping to the custom chains described above: resets for TCP, port unreachable for UDP and protocol unreachable otherwise.</p> <p>The IPv6 firewall rules are mostly the same as the IPv4 rules. The modifications: the ICMP ping rule is replaced with an ICMPv6 ping rule, and the two ICMP rejects are replaced with ICMPv6 <code>icmp6-adm-prohibited</code>.
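</p> <p>The ICMPv6 counterparts of those rules might look like the following (a sketch; the layout is simplified compared to my logging chains above):</p> <pre><code># allow ICMPv6 echo requests instead of the IPv4 ICMP ping rule
ip6tables -A INPUT -p ipv6-icmp --icmpv6-type echo-request -j ACCEPT
# rejects use icmp6-adm-prohibited instead of the ICMP unreachables
ip6tables -A INPUT -p udp -j REJECT --reject-with icmp6-adm-prohibited
ip6tables -A INPUT -p tcp -j REJECT --reject-with tcp-reset
ip6tables -A INPUT -j REJECT --reject-with icmp6-adm-prohibited
</code></pre> <p>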
The <a href="">Arch Linux wiki article</a> also has recommendations on how to allow the Neighbor Discovery Protocol and other IPv6 peculiarities.</p> <p>After configuration both the <code>iptables</code> and <code>ip6tables</code> systemd services need to be enabled.</p> <h2 id="utilities">Utilities</h2> <p>Nothing fancy here, just a few small and useful programs I've installed along the way:</p> <ul> <li><code>tree</code> - draw a tree of files and directories</li> <li><code>shellcheck</code> - check shell scripts for common pitfalls and mistakes</li> <li><code>pacutils</code> - provides the <code>paccheck</code> utility to see which pacman packages were modified after installation</li> <li><code>pkgstats</code> - help Arch Linux maintainers by sharing anonymous package usage stats. I had to mask its systemd timer (<code>systemctl mask pkgstats.timer</code>) to disable automatic uploads of the reports.</li> <li><code>wget</code> - command-line file downloader</li> </ul> <h2 id="rust">Rust</h2> <p>Pretty much boils down to installing <code>rustup</code>, which is available from the official repositories. See <code>rustup -h</code> for details on installing, updating and selecting the default Rust toolchain. <a href="">The wiki page</a> on the subject is pretty extensive and detailed as well.</p> <h2 id="caveats">Caveats</h2> <p>I'd like to change GNU screen's command prefix to Ctrl-I as it is on the home row of the Workman layout (unlike Ctrl-J), but then Tab gets intercepted and no longer works for bash completion (in a terminal Ctrl-I and Tab produce the same character, so this is likely unavoidable).</p> <p>To reconnect to a different WiFi network <code>wifi-menu</code> should be used. <code>netctl start &lt;profile&gt;</code> fails because the interface is already up.</p> <h2 id="conclusions">Conclusions</h2> <p>Getting work done from the virtual console is definitely possible and it's definitely useful to be able to do so. A lot of things work differently in the virtual console and you'll have to adapt (e.g.
scrolling back, copying and pasting, working with multiple shells) but it's not that inconvenient once you've learned your way around. I'll probably use a graphical terminal emulator even for my command line work from now on. The main limitation I've run into is the unavailability of large bitmap fonts. Even the largest I've found so far (Terminus 32) is a bit too small for me. Going further down this route I'd probably need to make my own console fonts. I've got an impression that it would be more convenient to work with a virtual console on a desktop, because of the availability of a full-size keyboard (scrollback, e.g., relies on the PageUp/PageDown keys) and lower screen pixel density.</p> <h2 id="credits">Credits</h2> <p>As in the first part of the guide I've relied heavily on <a href="">ArchWiki</a>, in particular the <a href="">General recommendations section</a>.</p> Paper 2017-03-21T00:00:00+00:00 2018-09-25T00:00:00Z <p><small><em>(written on December 23, 2016)</em></small></p> <p>After trying almost (hello, org-mode) all the digital solutions for organizing my life out there and spending hundreds of dollars and countless hours worth of time, I've bought myself a paper notebook. And I'm happy that I did. I used to be a firm proponent of keeping one's life as digital as possible, so it may sound like a strange decision. And indeed it is. Not that I've never used a paper notebook before. But it was never front and center of organizing my life.</p> <p>The notebook never runs out of charge, never crashes, and I'll never lose my notes because of a bug in synchronization. Simple things are what brings happiness.</p> Build Yourself Arch Linux 2016-11-21T00:00:00+00:00 2018-09-25T00:00:00Z <h1 id="part-1-base-system">Part 1: Base System</h1> <p>This is the first part of a series of articles (<a href="">part 2</a>, <a href="">part 3</a>) on how I set up dual boot Arch Linux on my Mid 2014 MacBook Pro.
At the end of this part I'll have a bootable but completely minimal installation of Arch on an encrypted partition, without any tuning. The ability to boot into Mac OS will be preserved.</p> <h3 id="updates">Updates</h3> <p>December 11, 2016: clarify how to customize the list of installed base packages; add instructions on how to start wired network automatically; describe a separate <code>/data</code> partition.</p> <p>January 8, 2017: provide a fix for the startup hang caused by the dhcpcd systemd service; add a mkinitcpio hook for USB keyboard detection; use a larger console font.</p> <p>April 2, 2017: recommend overriding the <code>[email protected]</code> systemd unit settings instead of editing the original file.</p> <h2 id="why">Why?</h2> <p>I've had the idea of going back to using Linux for some time now. It's hard for me to pinpoint a specific reason. I've got a feeling that I'm missing out on lots of interesting stuff going on in the Linux world, like the renaissance of containerisation and virtualization technologies, and unikernels. While it's true that almost all of that is made available for Mac, it oftentimes feels like terrible hacks have been done to make it work. Also, I'm inclined to think that I haven't learned as much about computers as I could have during my time on OS X. I'm usually hesitant to learn proprietary things, because of the irrelevance of the knowledge outside of the cage. That way, I didn't get to know OS X itself that well. I have learned the other side of &quot;just works&quot; - the feeling of helplessness when things do not work.</p> <p><a href="">The latest Apple special event</a> was just about as much bullshit as I can take. Do whatever you want with your TouchBar and feedbackless keyboard, Apple, but I'm leaving. I'm not going to throw away the computer I've spent about $3K on though, so here I am, installing Arch Linux on my MacBook.</p> <p>Given the above arguments one may ask why not just install Ubuntu, for example.
While Ubuntu is a good system to start with (I've used it myself for about two years), I wanted to try something different this time. The primary reason to try Arch Linux is learning. I have learned <em>a lot</em> while installing Arch Linux and writing this tutorial. The other thing is that I enjoy the feeling of control over my machine. I like to know exactly what is installed and what is running on my computer, and I don't want to be distracted by stuff I do not use. Also I like the idea of a <a href="">rolling release</a>: being able to run the latest version of packages and the kernel is nice.</p> <p>While there's no shortage of tutorials for installing Arch Linux on a MacBook, everyone's case is different, so my experience may be useful to someone. I don't intend to write an exhaustive tutorial with every command one will need to execute, but rather an outline with links to already existing materials and notes on how my personal setup differs. Please follow the awesome <a href="">Installation guide</a> and <a href="">MacBook article</a> on ArchWiki, think, and read (or at least skim through) the man pages of unfamiliar commands. I'm not liable for any damage caused by my writings to your computer, so be careful. Let's go.</p> <h2 id="preparing-the-installation">Preparing the installation</h2> <p>Before starting you'll need to buy/borrow an Ethernet adapter for your Mac, as WiFi probably won't work out of the box. It is possible to install Arch Linux using WiFi but I decided to skip the hassle. See <a href=",x#Installation">the wiki</a> if you're interested. Also, I would recommend using an external display so as not to be blinded by your laptop's brightness during the installation. I'm going to keep OS X on my laptop for the foreseeable future and dual boot into Arch Linux.
If you do not intend to do that, some parts of the installation process will be simpler (at the very least you would not have to worry about wiping the wrong partition).</p> <p>To keep the OS X partition untouched I needed to shrink it to make some space for Linux. I split my 256G storage in half by shrinking the partition using OS X. There were some hiccups: OS X El Capitan uses Core Storage Volumes by default and they are not easily resizeable. To resize the partition you'll need to <a href="">convert it back</a> to a &quot;native volume&quot;. Before conversion the volume needs to be unencrypted: you'll have to turn FileVault off and wait a few hours until the disk is decrypted. After that you can resize the OS X partition (using the Disk Utility app or the <code>diskutil</code> command) and turn FileVault back on.</p> <p>First I had to <a href="">grab an Arch Linux ISO</a>, which is around 800 MB. I always forget to verify downloads but it's better not to skip this step. Read <a href="">here</a> how to verify the downloaded ISO using GPG. If you have no GPG installed, checksums are better than nothing. The GPG signature and the checksums are available on the download page. Preparing the installation USB is dead simple, see <a href="">the instructions</a>. Reboot, hold Alt as soon as you hear the startup sound and select your USB stick as the boot device. You're in the Arch Linux installation shell now.</p> <p>Before proceeding I needed to configure the installation environment a little bit. <a href="">Set the font</a> to something bigger (12x22) to be able to actually see something (I use iso01-12x22). If you, like me, use something other than the standard US layout, you'll need to <a href="">change the keymap</a>. Because I use the <a href="">Workman layout</a> I had to download a keymap for it from <a href="">GitHub</a>. If you want to remap Caps Lock to Control immediately, edit your preferred keymap file and set keycode 58 to <code>Control</code> as I did <a href="">here</a> for Workman.
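</p> <p>In command form the console setup above is simply (the keymap file name here is an example):</p> <pre><code># bigger console font
setfont iso01-12x22
# load the downloaded (and possibly edited) keymap
loadkeys ./workman.map
</code></pre> <p>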
To download something from GitHub you can use the <code>elinks</code> browser from the shell. <a href="">Adjust the system clock</a> and let's continue onto the next step.</p> <h2 id="disk-partition">Disk partition</h2> <p>I want to have full disk encryption enabled for my Linux partitions. To do that I've decided to go with LVM on LUKS, one of the <a href="">available options</a>. It means that there's a single container encrypted using dm-crypt with LUKS, and inside the container there are multiple logical volumes managed by LVM. dm-crypt is the standard encryption functionality provided by the kernel itself and LUKS is a convenient utility to manage it. LVM is a flexible solution for managing logical partitions independently of the disk layout.</p> <p>There's a <a href="">nice article</a> on ArchWiki to help you decide what partition layout you want. I have settled on the following scheme: 20 GB for the root partition; 12 GB for <code>/var</code> to prevent runaway logs from eating up the space; 16 GB for the swap partition; 18 GB for <code>/home</code> to store operational data and user configurations; and, as it is generally not recommended to share <code>/home</code> across OS installations, a separate <code>/data</code> partition for longer term storage (~47 GB).</p> <p>First we need a partition for our LVM container. Use <code>fdisk -l</code> (or any other <a href="">partition tool</a>) to take a look at the layout you've got already. In my case I'm interested in <code>/dev/sda</code>, half of it occupied by OS X as <code>/dev/sda1</code> and half of it free. I used <code>gdisk</code> to create a new partition of type <code>8E00</code> (Linux LVM). It was created as <code>/dev/sda2</code>. That's pretty much all I need from <code>gdisk</code>; all the LVM volumes will be living inside the container <code>/dev/sda2</code>.
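</p> <p>The commands behind this layout look roughly as follows. The volume group and mapper names are my picks; follow the linked wiki pages rather than copying this blindly:</p> <pre><code># encrypt and open the container
cryptsetup luksFormat /dev/sda2
cryptsetup open /dev/sda2 cryptlvm
# set up LVM inside it
pvcreate /dev/mapper/cryptlvm
vgcreate vg0 /dev/mapper/cryptlvm
lvcreate -L 20G vg0 -n root
lvcreate -L 12G vg0 -n var
lvcreate -L 16G vg0 -n swap
lvcreate -L 18G vg0 -n home
lvcreate -l 100%FREE vg0 -n data
</code></pre> <p>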
NB: Apple recommends leaving a 200 MB gap between partitions for possible future use; use <code>+200M</code> as the start sector of your new partition if you want to. We're going to leave the existing <a href="">UEFI</a> boot partition as is, so you won't need to create a new one for <code>/boot</code>.</p> <p>If you want to be secure, the newly created container partition needs to be <a href="">securely wiped</a> first. Please try not to mistype your <em>empty</em> partition's number. <a href="">Here</a> is how to initialize the encrypted LUKS container and <a href="">slice it into partitions</a>. Skip the sections about the boot partition as we have one already. I've initialized LUKS with the default <a href="">encryption options</a> as they seemed to roughly match what people recommend to use anyway. By this time you should have all the partitions formatted and mounted. Now mount your <a href="">ESP</a> to <code>/mnt/boot</code>.</p> <h2 id="installing-and-configuring-base-system">Installing and configuring base system</h2> <p>First, check if the <a href="">list of mirror servers</a> looks sane. The highest priority servers were not from across an ocean and the package download speed was pretty high, so I'm fine with the defaults.</p> <p>Now it's time to <a href="">set up the file system of your new OS and install the most necessary packages</a>. If you want to be absolutely minimal and customize the list of installed base packages, pass the <code>-i</code> option to <code>pacstrap</code> and read <a href="">here</a> about the syntax used. I do not need <code>nano</code> or <code>mdadm</code> (a RAID management tool), for example. To keep my modified version of the Workman layout, I copied the <code>.kmap</code> file to <code>/mnt/usr/share/kbd/keymaps/</code>. One more thing to do before you switch into your brand new environment is to <a href="">generate fstab</a>.</p> <p>Behold and <a href="">enter the brave new world</a>!
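</p> <p>The last few steps condensed into commands - a sketch assuming the <code>vg0</code> volume group name and ext4 file systems, which are my choices:</p> <pre><code># format and mount the logical volumes (root shown, repeat for the rest)
mkfs.ext4 /dev/vg0/root
mount /dev/vg0/root /mnt
mount /dev/sda1 /mnt/boot        # the existing ESP
# install the base system and generate fstab
pacstrap -i /mnt base
genfstab -U /mnt &gt;&gt; /mnt/etc/fstab
# switch into the new environment
arch-chroot /mnt
</code></pre> <p>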
Configure <a href="">timekeeping</a>, <a href="">font</a>, <a href="">locale and keyboard</a>, and <a href="">hostname</a>. To figure out whether your hardware clock is indeed set to UTC, compare <code>hwclock --utc</code> and <code>hwclock --localtime</code> (unless you live in UTC :P). While a wired network should work out of the box (I'm going to configure WiFi later), it won't be started automatically. <a href="">Set the root password</a>; it will be possible to create non-root users later.</p> <p>I'm not a big fan of tiny fonts, so I installed <a href=""><code>terminus-font</code></a> and used <code>ter-132n</code> (Terminus ISO8859-1 16x32 normal) for the persistent configuration. It looks much nicer and is a much more comfortable size to read than the default 12x22 font.</p> <p>To have the network up after boot you'll need to <code>systemctl enable</code> <a href="">the DHCP service</a>. It is recommended to enable DHCP for a specific interface. This is good so that Wi-Fi isn't turned on automatically until requested, avoiding possible conflicts with network management software. I ran into a problem with this approach however: systemd was waiting 90 seconds for dhcpcd to be started for the wired interface if the cable was unplugged. I found a solution on the Arch Linux <a href="">bug tracker</a>. See the thread for the patch; you can override the <code>[email protected]</code> template settings in <code>/etc/systemd/system/[email protected]/override.conf</code> (more details on overriding systemd unit configuration can be found in <code>man systemd.unit</code>).</p> <h2 id="initramfs">initramfs</h2> <p>When the computer is turned on, the first thing that runs is the UEFI firmware. UEFI then launches a UEFI executable, the systemd-boot boot manager that we will install later. systemd-boot is called a &quot;boot manager&quot; and not a &quot;bootloader&quot; because all it can do is launch other UEFI applications from the ESP. In our case that would be the Linux kernel.
EFISTUB is a feature of newer kernels that allows the kernel itself to act as a UEFI executable. The kernel then loads the initramfs, which runs a set of prebuilt hooks to prepare the system to boot. After that the actual OS boot starts from the init process, systemd in our case.</p> <p>Since the root partition is encrypted, it needs to be decrypted before the system can boot. To enable that, <a href="">initramfs hooks</a> are required. To use a custom console font and keyboard layout when entering the encrypted partition password, add the <code>consolefont</code> and <code>keymap</code> hooks before <code>encrypt</code>. I also experienced my DasKeyboard (connected via the monitor's USB hub) sometimes not being recognized at the encryption password input stage. To avoid the problem add the <code>keyboard</code> hook as well. After mkinitcpio, the tool used to build the initramfs, is reconfigured, the initramfs needs to be <a href="">rebuilt</a>.</p> <h2 id="bootloader">Bootloader</h2> <p><a href="">systemd-boot</a> is a pretty simple boot manager to set up so I'll use it to start. Later I'd like to try rEFInd and its <a href="">beautiful themes</a>. Installation of systemd-boot comes down to <code>bootctl install</code>. <a href="">Here</a> is how to configure it. <a href="">The section</a> on configuring a bootloader for LVM on LUKS is of particular interest. Notably, you'll only need to add the Arch Linux entry by hand; OS X will be detected automatically at boot time based on the information available on the ESP.</p> <p>It is highly recommended to <a href="">enable CPU microcode updates</a> to ensure system stability. Don't forget to update the systemd-boot Arch Linux entry. You should be able to verify that you did everything properly by searching for &quot;microcode updated&quot; in the system startup log using <code>journalctl</code>.</p> <p>Now you're ready to <a href="">reboot</a> into your new shiny OS. On startup you should be presented with the systemd-boot menu.
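</p> <p>For reference, a loader entry for LVM on LUKS looks roughly like this; the UUID placeholder and the volume names are to be replaced with your own:</p> <pre><code># /boot/loader/entries/arch.conf
title   Arch Linux
linux   /vmlinuz-linux
initrd  /intel-ucode.img
initrd  /initramfs-linux.img
options cryptdevice=UUID=&lt;uuid-of-sda2&gt;:cryptlvm root=/dev/vg0/root rw
</code></pre> <p>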
After selecting Arch Linux and before boot you'll be presented with the prompt for your encryption password. After that you should see the command line login prompt. Use the <code>root</code> user for now.</p> <h2 id="caveats">Caveats</h2> <p>While most things work as expected, there are a few hiccups worth mentioning. First I've noticed that <code>/var</code> fails to be unmounted on shutdown. It looks like the problem is the partition being busy because it's used by systemd to write system shutdown logs. A quick search revealed that I'm not the only one having problems unmounting a separate <code>/var</code> partition, and it should be forcibly unmounted by the end of the shutdown process anyway. Still, I'd like to make the unmounting error go away.</p> <p>Right now I have to type my password twice: once to unlock the encrypted partition and once to log in as the <code>root</code> user. To not have to do it twice I'm probably going to enable automatic login as a non-root user.</p> <p>Be careful with doing anything while on battery: until suspend is set up the machine will halt when it runs out of battery.</p> <h2 id="closing-thoughts">Closing thoughts</h2> <p>If you find any problems I've failed to mention, or any mistakes, while following this guide, please email me about them and I'll update the post. I'll appreciate any suggestions for improvements.</p> <p>In some time I hope to post the second part of the guide.
I'm going to focus on things like user and power management and wireless, and continue to build the system upon the base I have.</p> <h2 id="credits">Credits</h2> <p>To write this guide I relied heavily on <a href="">ArchWiki</a>, and three tutorials by <a href="">Loïc Pefferkorn</a>, <a href="">Michael Chladek</a> and <a href="">0xADADA</a>.</p> Hash Collision 2016-11-20T00:00:00+00:00 2018-09-25T00:00:00Z <h1 id="or-why-i-m-leaving-google-s-services">Or Why I'm Leaving Google's Services</h1> <p>Recently there was a lot of <a href="">talk</a> about Google services on Hacker News. The concerns people have about losing control over their data prompted me to check how I use Google.</p> <p>There are a few reasons to worry: losing your data, giving too much personal information to one company, and having someone (or something) reading (or scanning) your private stuff. While I tried to limit my use of Google's services in the past and moved off the major ones like Gmail, Calendar and Contacts, there's still a lot of my stuff on Google's servers, especially given my switch to Android about half a year ago.</p> <p>I was frightened a little bit when I realised that all of my photo archive is in Google Photos and I do not have any backups. While I do trust Google to store my data reliably, I do not want to be denied access to my entire photo collection because of a conflict of interest with the company (or simply because of a natural disaster). I decided to download my photos using <a href="">Google Takeout</a> to have at least a local copy until I figure out how I'm going to back up my data properly and find another photo storage solution. I'm looking at Dropbox and Flickr now; Flickr should be more convenient, but Dropbox may be more private. Note how privacy has become a relative term nowadays.</p> <p>I was surprised how much Google knows about me. You'll be surprised as well after exploring the <a href="">privacy section</a> of your Google account settings.
To name a few things: my app usage and passwords, voice and web searches, location history and maps searches, YouTube viewing history, fitness data. That's a huge amount of personal information if you think about it. And that's given that I'm a pretty modest user of Google; there are people keeping literally everything in Google's services. Lots of the stuff I've shared with Google is actually because I've tried to use my Nexus the Google way from the beginning. You'd rather not agree blindly to everything your new Android phone asks you about during setup. Thankfully, it's possible to turn off most of the tracking, but it takes effort.</p> <p>While reading about privacy in relation to Google's services I've stumbled upon <a href="">PhotoDNA</a>. The technology is used to scan images to detect unlawful activity like child pornography. And this technology (or something similar) is known to be employed by Google. While I'm totally for the fight against child pornography, the particular methods used do worry me. Metaphorically speaking, I don't want to get into trouble because of a <a href="">hash collision</a>. There's always a chance that machine learning used for crime detection will give a false positive result; it will <em>never</em> be 100% accurate. Also, there are bugs. I don't want to be part of the game, I don't want to engage with law enforcement because of a mistake made by an algorithm. I would rather give up some convenience and have my personal data encrypted in the cloud, never to be scanned regardless of intentions. Also, with iOS 10 Apple <a href="">showed</a> that it's viable to employ machine learning locally (even on a phone) to provide conveniences like categorization and face recognition, which were usually associated with processing photos in the cloud.</p> <p>(A hash is a supposedly unique fixed size value associated with some input.
But because there is a limited number of fixed size values and an unlimited number of inputs, inevitably some different inputs will be associated with the same hashes. That's called a hash collision. The same basic idea applies to PhotoDNA: two different pictures, one illicit and one not, will trigger the same result given a large number of samples and an imprecise system, which machine learning always is.)</p> <p>Going forward I'm going to log in to my Google account in the browser only when needed, to not have myself constantly tracked on the web. Most of the services are perfectly functional without an account, like Translate, Maps, Search (I use it as a fallback for DuckDuckGo), even YouTube, despite completely crappy default recommendations. Also I'm thinking of creating separate accounts for the services I'll continue to use, like Alerts and Webmaster Tools, and for my Android phone. Some services I'm going to ditch, like Analytics, Photos and Hangouts. Getting my pictures off Google Photos will be the hardest but I'd like to do it.</p> <p>I'm not agitating for you to leave Google services, but be aware of what information you give away and remember to keep backups of your data. If the topic is of interest to you, I recommend <a href="">a very good (despite quite old) article</a> about privacy and encryption by the creator of <a href="">PGP</a>, Philip Zimmermann.</p> On Writer's Block 2016-11-11T00:00:00+00:00 2018-09-25T00:00:00Z <p>If you have a look at the <a href="">timeline</a> of this blog of mine it will be obvious that I'm dealing with writer's block. I always wanted to have a <em>real</em> blog, to do <code>git push</code> to publish, to build it <em>from scratch</em> myself. And about 9 months ago I got it. I found <a href="">Hakyll</a>, which felt awesome, like assembling a model airplane from parts. I refined it over a couple of weeks and got what I wanted to have. The first post even caught some attention as I decided to post it to Reddit.
There has been both positive and negative <a href="">feedback</a>. But there will <em>always</em> be negative feedback, so it was fine.</p> <p>But than I stopped. Not that I stopped to write. I still drafted quite a few articles, had a few enlightening ideas of posts. But I never got to publish any of that. So I decided to <em>post</em> about writer's block as I got to know what it is too well.</p> <p>How it usually was:</p> <ol> <li>get excited</li> <li>type 500-100 words and get my thoughts out</li> <li>calm down, save the file</li> <li>whatever, I'll edit it tomorrow</li> <li>the life goes on</li> </ol> <p>What I decided to do instead it to &quot;get it posted&quot;. To start writing with an intention to post. It doesn't matter what the post will be about, whatever consumes my mind the most at the moment, get it <em>out</em>.</p> <p>How I decided to accomplish what I haven't been able to do? First, I'm already fed up with my inability to post. Motivation is the key and all of that. The fact that most of my friends have empty blogs do not help, I acknowledge. Reduce friction (you probably know that already): I started to edit a post template directly rather than a file somewhere so publish is literally one command. Do it fast: your energy won't last long. Edit immediately, but not during writing. I never got to edit most of my drafts. Some <a href="">deep house</a> may help, if that's your kind of thing. Oh yeah, shutdown your Internet.</p> <p>Finally, keep it short (one screen with large font). That's it. I hope you'll succeed one day as well.</p> <p>Bonus tip: you can edit posts <em>after</em> you published them :P</p> Space Leader 2016-02-27T00:00:00+00:00 2018-09-25T00:00:00Z <p><small><em>(written on December 12, 2015)</em></small></p> <h1 id="why-i-use-space-as-my-vim-leader-key">Why I use space as my Vim leader key</h1> <p>One of the best Vim productivity boosts is to configure your leader key. 
What leader key does - it gives you a namespace for custom mappings. No default Vim mappings use leader key, so you're free to choose whatever shortcuts you like without worrying about conflicts with some predefined mappings. Considering this it makes sense to define custom mappings using leader key. It also facilitates remembering of shortcuts by providing mental separation for the ones you've crafted yourself. To activate a shortcut you just press leader key and than a specific mapping, e.g. I use <code>&lt;leader&gt;w</code> to save current file. Configuring such a mapping is quite easy: add <code>map &lt;leader&gt;w</code> to your .vimrc and you're done. Noticing things you do repeatedly working day-to-day in Vim and creating custom mappings for them will allow you to save a little bit of time constantly and will make editing with Vim more effortless.</p> <p>Lots of Vim tutorials recommend to use <code>,</code> as leader key. However I see a few benefits of using spacebar instead.</p> <h2 id="it-doesn-t-override-any-default-vim-mappings">It doesn't override any default Vim mappings</h2> <p>Contrary to <code>,</code>, which is used to move cursor to previous occurrence of a character navigated to using <code>t</code> or <code>f</code>, space doesn't do any particularly useful in normal mode by default. Yes, it does move cursor to the next character, but the motion is already covered by <code>l</code> key anyway (or an arrow key if you like to use those). So we could safely override default space behaviour. <code>,</code> is quite important for my workflow on the opposite side.</p> <h2 id="space-is-easy-to-reach-to">Space is easy to reach to</h2> <p>Most of your custom mappings will use leader key, so it should be better easy to type. Spacebar is a huge key placed very conveniently on most keyboards. 
It's a safe bet regarding ergonomics.</p> <h2 id="space-is-symmetrical">Space is symmetrical</h2> <p>This one alone is a good reason for me to use spacebar as my leader key. Given it's <em>equally</em> easily reachable for <em>both hands</em>, space leaves me with all the alphanumeric keys as convenient shortcut options. Again in contrast to <code>,</code> which is typed by the right hand making left hand letters more comfortable options for typing shortcuts without stretching fingers.</p> <p>There's even a text editor (a flavour of Emacs) based around the idea of space key as a gateway to all the editor commands called <a href="">Spacemacs</a>. I've been playing around with it a little bit recently and it has been positive experience so far. Spacemacs goes one step further by providing instant visual feedback about available commands once you've pressed space key.</p> <h2 id="think-for-yourself">Think for yourself</h2> <p>This way to use leader key fits <em>my</em> workflow very well. It may or may not be as efficient in your case. Be critical of that &quot;mappings every Vimmer should use&quot; tips (and of this article as well). Try what works best for <em>you</em>, your fingers and your keyboard. If you're interested more in how I use Vim check out my <a href="">dotfiles</a> GitHub repo. Happy editing!</p> <p>It would be interesting to know how do you use leader key, join the <a href="">Reddit discussion</a>.</p>