Planet Linux Plumbers Conf

June 03, 2009

Darrick Wong

Picspam!

/me rounded up a bunch of (old) panoramas and put them into the high-definition panorama viewer. Be sure to check out the (huge spike in memory cache when you load the) panorama previewer (click the "See All" button).

June 03, 2009 02:27 AM

August 27, 2008

Stephen Hemminger

Exploring transactional filesystems

In order to implement router-style semantics, Vyatta allows setting many different configuration variables and then applying them all at once with a commit command. Currently, this is implemented by a combination of shell magic and unionfs. The problem is that keeping unionfs up to date and fixing the resulting crashes is a major pain.

There must be better alternatives; current options include:
  • Replace unionfs with aufs, which has fewer users yelling at it and more developers.
  • Use a filesystem like btrfs which has snapshots. This changes the model and makes APIs like "what changed?" hard to implement.
  • Move to a pure userspace model using git. The problem here is that git as currently written is meant for users, not transactions.
  • Use a combination of copy, bind mount, and rsync (a rough sketch follows this list).
  • Use a database for configuration. This is easier for general queries but is the most work. Conversion from the existing format would be a pain.
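A very rough sketch of the copy + bind mount + rsync idea (illustrative paths, no error handling, and not Vyatta's actual implementation):

cp -a /config/active /config/working                  # snapshot the running configuration
mount --bind /config/working /config/active           # "set" commands now edit the copy
# ... user stages changes through the CLI ...
umount /config/active                                 # on commit, expose the real tree again
rsync -a --delete /config/working/ /config/active/    # fold the staged changes in
rm -rf /config/working

Rolling back is just the umount plus removing the working copy.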
Looks like a fun/hard problem. Don't expect any resolution soon.

by Linux Network Plumber (noreply@blogger.com) at August 27, 2008 10:20 PM

January 26, 2020

Paul E. McKenney

Confessions of a Recovering Proprietary Programmer, Part XVII

One of the gatherings I attended last year featured a young man asking if anyone felt comfortable doing git rebase “without adult supervision”, as he put it. He seemed as surprised to see anyone answer in the affirmative as I was to see only a very few people so answer. This seems to me to be a suboptimal state of affairs, and thus this post describes how you, too, can learn to become comfortable doing git rebase “without adult supervision”.

Use gitk to See What You Are Doing

The first trick is to be able to see what you are doing while you are doing it. This is nothing particularly obscure or new, and is in fact why screen editors are preferred over line-oriented editors (remember ed?). And gitk displays your commits and shows how they are connected, including any branching and merging. The current commit is evident (yellow rather than blue circle) as is the current branch, if any (branch name in bold font). As with screen editors, this display helps avoid the otherwise-inevitable errors stemming from you and git disagreeing on the state of the repository. Such disagreements were especially common when I was first learning git. Given that git always prevailed in these sorts of disagreements, I heartily recommend using gitk even when you are restricting yourself to the less advanced git commands.

Note that gitk opens a new window, which may not work in all environments. In such cases, the --graph --pretty=oneline arguments to the git log command will give you a static ASCII-art approximation of the gitk display. As such, this approach is similar to using a line-oriented editor, but printing out the local lines every so often. In other words, it is better than nothing, but not as good as might be hoped for.
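For example, something along these lines (the alias name is just a suggestion) gives a quick text-mode view of the commit graph:

git log --graph --pretty=oneline --abbrev-commit
# or keep it handy as an alias:
git config --global alias.tree "log --graph --pretty=oneline --abbrev-commit --all"
git tree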

Fortunately, one of my colleagues pointed me at tig, which provides a dynamic ASCII-art display of the selected commits. This is again not as good as gitk, but it is probably as good as it gets in a text-only environment.

These tools do have their limits, and other techniques are required if you are actively rearranging more than a few hundred commits. If you are in that situation, you should look into the workflows used by high-level maintainers or by the -stable maintainer, who commonly wrangle many hundreds or even thousands of commits. Extreme numbers of commits will of course require significant automation, and many large-scale maintainers do in fact support their workflows with elaborate scripting.

Doing advanced git work without being able to see what you are doing is about as much a recipe for success as chopping wood in the dark. So do yourself a favor and use tools that allow you to see what you are doing!

Make Sure You Can Get Back To Where You Started

A common git rebase horror story involves a mistake made while rebasing, but with the git garbage collector erasing the starting point, so that there is no going back. As the old saying goes, “to err is human”, so such stories are all too plausible. But it is dead simple to give this horror story a happy ending: Simply create a branch at your starting point before doing git rebase:

git branch starting-point
git rebase -i --onto destination-commit base-commit rebase-branch
# The rebased commits are broken, perhaps misresolved conflicts?
git checkout starting-point # or maybe: git checkout -B rebase-branch starting-point


Alternatively, if you are using git in a distributed environment, you can push all your changes to the master repository before trying the unfamiliar command. Then if things go wrong, you can simply destroy your copy, re-clone the repository, and start over.
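One hedged way to do that with plain git, assuming your remote is named origin and using a made-up backup branch name, might be:

# push a safety copy of the current state before experimenting
git push origin HEAD:refs/heads/backup-before-rebase
# ...and if things go badly wrong, throw the clone away and start over:
cd .. && rm -rf my-repo
git clone ssh://git.example.org/path/to/master-repo.git my-repo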

Whichever approach you choose, the benefit of ensuring that you can return to your starting point is the ability to repeat the git rebase as many times as needed to arrive at the desired result. Sort of like playing a video game, when you think about it.

Practice on an Experimental Repository

On-the-job training can be a wonderful thing, but sometimes it is better to create an experimental repository for the sole purpose of practicing your git commands. But sometimes, you need a repository with lots of commits to provide a realistic environment for your practice session. In that case, it might be worthwhile to clone another copy of your working repository and do your practicing there. After all, you can always remove the repository after you have finished practicing.
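A throwaway practice copy of an existing repository is cheap to make; something like this (paths are illustrative) is usually all it takes:

# clone a local working repository into a scratch location for practice
git clone --no-hardlinks /path/to/working/repo /tmp/git-practice
cd /tmp/git-practice
# ...practice rebases, filter-branch runs, and so on...
# when finished, throw the whole thing away:
cd ~ && rm -rf /tmp/git-practice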

And there are some commands that have such far-reaching effects that I always do a dry-run on a sacrificial repository before trying it in real life. The poster boy for such a command is git filter-branch, which has impressive power for both good and evil.


In summary, to use advanced git commands without adult supervision, first make sure that you can see what you are doing, then make sure that you can get back to where you started, and finally, practice makes perfect!

by paulmck at January 26, 2020 01:22 AM

January 20, 2020

Paul E. McKenney

The Old Man and His Macbook

I received a MacBook at the same time I received the smartphone. This was not my first encounter with a Mac; in fact, I long ago had the privilege of trying out a Lisa. I occasionally made use of the original Macintosh (perhaps most notably to prepare a resume when applying for a job at Sequent), and even briefly owned an iMac, purchased to run some educational software for my children. But that iMac was my last close contact with the Macintosh line, some 20 years before the MacBook: since then, I have used Windows and, more recently, Linux.

So how does the MacBook compare? Let's start with some positives:

  • Small and light package, especially when compared to my rcutorture-capable ThinkPad. On the other hand, the MacBook would not be particularly useful for running rcutorture.
  • Much of the familiar UNIX userspace is right at my fingertips.
  • The GUI remembers which windows were on the external display, and restores them when plugged back into that display.
  • Automatically powers off when not in use, but resumes where you left off.
  • Most (maybe all) applications resume where they left off after rebooting for an upgrade, which was an extremely pleasant surprise.
  • Wireless works seamlessly.


There are of course some annoyances:

  • My typing speed and accuracy took a serious hit. Upon closer inspection, this turned out to be due to the keyboard being smaller than standard. I have no idea why this “interesting” design choice was made, given that there appears to be ample room for full-sized keys. Where possible, I connect a full-sized keyboard, thus restoring full-speed typing.
  • I detest trackpads, but that is the only built-in mouse available, which defeats my usual strategy of disabling them. As with the keyboard, where possible I connect a full-sized mouse. In pleasing contrast to the earlier Macs, this MacBook understands that a mouse can have more than one button.
  • I found myself detesting the MacBook trackpad even more than usual, in part because brushing up against it can result in obnoxious pop-up windows offering to sell me songs and other products related to RCU. I disabled this advertising “feature” only to find that it was now putting up obnoxious pop-up windows offering to look up RCU-related words in the dictionary. In both cases, these pop-up windows grab focus, which makes them especially unfriendly to touch-typists. Again, the solution is to attach a full-sized keyboard and standard mouse. Perhaps my next trip will motivate me to disable this misfeature, but who knows what other misfeature lies hidden behind it?
  • Connectivity. You want to connect to video? A memory stick? Ethernet? You will need a special adapter.
  • Command key instead of control key for cut-and-paste. Nor can I reasonably remap the keys, at least not if I want to continue using control-C to interrupt unruly UNIX-style applications. On the other hand, I freely admit that Linux's rather anarchic approach to paste buffers is at best an acquired taste.
  • The control key appears only on the left-hand side of the keyboard, which is also unfriendly to touch-typists.
  • Multiple workspaces are a bit spooky. They sometimes change order, or maybe I am accidentally hitting some key combination that moves them. Thankfully, it is very easy to move them where you want them: Control-uparrow, then drag and drop with the mouse.
  • I tried porting perfbook, but TeX Live took forever to install. I ran out of patience long before it ran out of whatever it was downloading.


Overall impression? It is yet another laptop, with its own advantages, quirks, odd corners, and downsides. I can see how people who grew up on the MacBook and use nothing else could grow to love it passionately. But switching back and forth between the MacBook and Linux is a bit jarring, though of course the MacBook and Linux have much more in common than did the five different systems I switched back and forth between in the late 1970s.

My current plan is to stick with it for a year (nine months left!), and decide where to go from there. I might continue sticking with it, or I might try moving to Linux. We will see!

by paulmck at January 20, 2020 01:52 AM

December 29, 2019

Sri Ramkrishna

Linux Application Summit 2019 – retrospective

I wanted to pen something before the year is gone about the recent Linux Application Summit 2019. This is the 3rd iteration of the conference and each iteration has moved the needle forward.

The thing that excites me going forward is what we can do when we work together across our various free and open source communities. LAS represents forming a partnership and building a new community around applications. By itself, the 'desktop' doesn't mean much to the larger open source ecosystem. That is not because it isn't important, but because open source communities have expanded at such a frenetic pace that many of them have no organizational memory of the foundational technologies our communities have built over the years, technologies they use and maintain every day.

Educating them would be too large a task; instead we need to capitalize on the hunger for the technology, toolchains, and experience that we build and possess. We can do that by presenting ourselves as the apps community, a label that carries no prejudice to outside communities. We own apps because we own the mindshare, through maturity, experience, and the communities that spring up around them.

From here, we can start representing apps not just through the main Linux App Summit, but through other venues. Create the Apps tracks at FOSDEM, Linux Foundation events, Plumbers etc.

In the coming weeks, I will be working with other conference organizers around the globe to see how we can create these tracks and make sure we are represented there.

LAS represented the successful creation of a meta-community, and from there we can build the influence we need to establish the norms we want for applications on the desktop.

Looking forward to 2020!

Big thanks to the GNOME Foundation for their support of Linux Application Summit.

by sri at December 29, 2019 09:04 PM

October 23, 2019

Sri Ramkrishna

Let’s fight back against patent trolls

The GNOME Foundation has taken the extraordinary step of not just defending itself against a patent troll but aggressively going after them. This is an important battle. Let me explain.

The initial reason for Rothschild to come after us is clear: they believe that the GNOME Foundation has money and that they can shake us down for some easy money with their portfolio of patents.

If we had lost, or simply given them the money, it would have made us a mark not just for Rothschild, but for every other patent troll who is probably watching this unfold. Worse, it would mean that all the other non-profits would be fair game. We do not want to set that precedent. We need to send a strong message that if they attack one of us, they attack us all.

The GNOME Foundation manages infrastructure around the GNOME Project, which consists of an incredible amount of software built over a nearly 23-year period. This software is used in everything from medical devices to consumer devices like the Amazon Kindle and Smart TVs, and of course the GNOME desktop.

The GNOME Project provides the tooling, software, and, more importantly, the maintenance and support for the community. Bankrupting the GNOME Foundation would mean that these functions would take a terrible blow and cripple the important work we do. The companies that depend on these tools and software would be similarly hit. That is just one non-profit foundation.

There are many other such foundations: Apache, the Software Freedom Conservancy, and the FSF, among others. They would be just as vulnerable as we are now.

What Rothschild has done is not just attack GNOME, but all of us in Free Software and Open Source, the toolchains we depend on, and the software we use. We can't let that happen. We need to strongly repudiate this patent troll, and not only defend ourselves but neuter them and make an example of them to warn off any other patent troll that thinks we are easy pickings.

Companies, individuals, and governments should give money so we can make a singular statement: not here, not now, not ever! Let's set that precedent. Donate to the cause. GNOME has a history of conquering its bullies. But we can't do that without your help.

An American President once said “They counted on us to be passive. They counted wrong.”

Donate now! 


by sri at October 23, 2019 03:59 AM

August 14, 2019

Greg KH

Patch workflow with mutt - 2019

Given that the main development workflow for most kernel maintainers is with email, I spend a lot of time in my email client. For the past few decades I have used mutt, but every once in a while I look around to see if there is anything else out there that might work better.

One project that looks promising is aerc, which was started by Drew DeVault. It is a terminal-based email client written in Go, and relies on a lot of other Go libraries to handle the "grungy" work of dealing with IMAP, email parsing, and the other free-flow text parsing that emails require.

aerc isn’t in a usable state for me just yet, but Drew asked if I could document exactly how I use an email client for my day-to-day workflow to see what needs to be done to aerc to have me consider switching.

Note, this isn’t a criticism of mutt at all. I love the tool, and spend more time using that userspace program than any other. But as anyone who knows email clients, they all suck, it’s just that mutt sucks less than everything else (that’s literally their motto)

I did a basic overview of how I apply patches to the stable kernel trees quite a few years ago, but my workflow has evolved over time, so instead of just writing a private email to Drew, I figured it was time to post something showing others just how the sausage really is made.

Anyway, my email workflow can be divided up into 3 different primary things that I do:

  • basic email reading, management, and sorting
  • reviewing new development patches and applying them to a source repository.
  • reviewing potential stable kernel patches and applying them to a source repository.

Given that all stable kernel patches need to already be in Linus's kernel tree first, the workflow for the stable tree is much different from the workflow for new patches.

Basic email reading

All of my email ends up in one of two "inboxes" on my local machine. The first holds everything that is sent directly to me (either with To: or Cc:), as well as a number of mailing lists for subsystems I maintain (like USB or stable), where I make sure to read every message. The second inbox consists of other mailing lists that I do not read in full, but review as needed and reference when I need to look something up. Those lists include the "big" linux-kernel mailing list, so that I have a local copy to search when I am offline (due to traveling), as well as other "minor" development lists that I like to keep local copies of, like linux-pci, linux-fsdevel, and a few other smaller vger lists.

I keep these maildir folders synced with the mail server using mbsync, which works really well and is much faster than offlineimap. I used offlineimap for many, many years, but it ends up being really slow when you do not live on the same continent as the mail server. Luis's recent post about switching to mbsync finally pushed me to take the time to configure it all properly, and I am glad that I did.
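For reference, a minimal ~/.mbsyncrc along these lines is enough to mirror a single IMAP inbox into a local maildir (the account name, server, and password command are placeholders, and isync releases newer than 1.3 spell Master/Slave as Far/Near):

IMAPAccount work
Host imap.example.com
User greg
PassCmd "pass show imap/work"
SSLType IMAPS

IMAPStore work-remote
Account work

MaildirStore work-local
Path ~/mail/
Inbox ~/mail/INBOX

Channel inbox
Master :work-remote:INBOX
Slave :work-local:INBOX
Create Slave
SyncState *

After that, running mbsync inbox pulls the folder down.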

Let’s ignore my “lists” inbox, as that should be able to be read by any email client by just pointing it at it. I do this with a simple alias:

alias muttl='mutt -f ~/mail_linux/'

which allows me to type muttl at any command line to instantly bring it up:

What I spend most of the time in is my “main” mailbox, and that is in a local maildir that gets synced when needed in ~/mail/INBOX/. A simple mutt on the command line brings this up:

Yes, everything just ends up in one place. In handling my mail, I prune relentlessly. Everything ends up in one of 3 states for what I need to do next:

  • not read yet
  • read and left in INBOX as I need to do something “soon” about it
  • read and it is a patch to do something with

Everything that does not require a response, or that I have already responded to, gets deleted from the main INBOX at that point, or saved into an archive in case I need to refer back to it again (like mailing list messages).

That last state makes me save the message into one of two local maildirs, todo and stable. Everything in todo is a new patch that I need to review, comment on, or apply to a development tree. Everything in stable is something that has to do with patches that need to get applied to the stable kernel tree.

Side note, I have scripts that run frequently that email me any patches that need to be applied to the stable kernel trees, when they hit Linus’s tree. That way I can just live in my email client and have everything that needs to be applied to a stable release in one place.

I sweep my main INBOX every few hours, and sort things out by either quickly responding, deleting, archiving, or saving into the todo or stable directory. I don't achieve a constant "inbox zero", but if I only have 40 or so emails in there, I am doing well.

So, for this main workflow, I need an easy way to:

  • filter the INBOX by a pattern so that I only see one “type” of message at a time (more below)
  • read an email
  • write an email
  • respond to existing email, and use vim as an editor as my hands have those key bindings burned into them.
  • delete an email
  • save an email to one of two mboxes with a few keystrokes
  • bulk delete/archive/save emails all at once

These are all tasks that I bet almost everyone needs to do all the time, so a tool like aerc should be able to do that easily.

A note about filtering. As everything comes into one inbox, it is easier to filter that mbox by various criteria so that I can process similar messages all at once.

As an example, I want to read all of the messages sent to the linux-usb mailing list right now, and not see anything else. To do that, in mutt, I press l (limit) which brings up a prompt for a filter to apply to the mbox. This ability to limit messages to one type of thing is really powerful and I use it in many different ways within mutt.
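A few typical limit patterns (the PATTERNS section of "man muttrc" has the full list; the list address and subject regex below are just examples):

l ~C linux-usb@vger.kernel.org     # messages sent To: or Cc: the linux-usb list
l ~N                               # only messages that are still unread
l ~s "PATCH.*xhci"                 # subject lines matching a regular expression
l all                              # clear the limit and show everything again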

Here’s an example of me just viewing all of the messages that are sent to the linux-usb mailing list, and saving them off after I have read them:

This isn’t that complex, but it has to work quickly and well on mailboxes that are really really big. As an example, here’s me opening my “all lists” mbox and filtering on the linux-api mailing list messages that I have not read yet. It’s really fast as mutt caches lots of information about the mailbox and does not require reading all of the messages each time it starts up to generate its internal structures.

Saving a message to the todo directory takes a two-keystroke sequence, .t, which saves it there automatically.

Again, that’s a binding I set up years ago, , jumps to the specific mbox, and . copies the message to that location.

Now you see why using mutt is not exactly obvious, those bindings are not part of the default configuration and everyone ends up creating their own custom key bindings for whatever they want to do. It takes a good amount of time to figure this out and set things up how you want, but once you are over that learning curve, you can do very complex things easily. Much like an editor (emacs, vim), you can configure them to do complex things easily, but getting to that level can take a lot of time and knowledge. It’s a tool, and if you are going to rely on it, you should spend the time to learn how to use your tools really well.
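As a rough illustration, such bindings might look something like this in ~/.muttrc (approximate, not copied from my real config):

macro index ,t "<change-folder>=todo<enter>"  "jump to the todo mbox"
macro index .t "<save-message>=todo<enter>"   "save the current message to todo"
macro index .s "<save-message>=stable<enter>" "save the current message to stable"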

Hopefully aerc can get to this level of functionality soon. Odds are everyone else does something much like this, as my use-case is not unusual.

Now let’s get to the unusual use cases, the fun things:

Development Patch review and apply

When I decide it’s time to review and apply patches, I do so by subsystem (as I maintain a number of different ones). As all pending patches are in one big maildir, I filter the messages by the subsystem I care about at the moment, and save all of the messages out to a local mbox file that I call s (hey, naming is hard, it gets worse, just wait…)

So, in my linux/work/ local directory, I keep the development trees for different subsystems like usb, char-misc, driver-core, tty, and staging.

Let’s look at how I handle some staging patches.

First, I go into my ~/linux/work/staging/ directory, which I will stay in while doing all of this work. I open the todo mbox with a quick ,t pressed within mutt (a macro I picked from somewhere long ago, I don’t remember where…), and then filter all staging messages, and save them to a local mbox with the following keystrokes:

mutt
,t
l staging
T
s ../s

Yes, I could skip the l staging step, and just do T staging instead of T, but it’s nice to see what I’m going to save off first before doing so:

Now all of those messages are in a local mbox file that I can open with a single keystroke, ’s’ on the command line. That is an alias:

alias s='mutt -f ../s'

I then dig around in that mbox, sort patches by driver type to see everything for that driver at once by filtering on the name and then save those messages to another mbox called ‘s1’ (see, I told you the names got worse.)

s
l erofs
T
s ../s1

I have lots of local mbox files all “intuitively” named ‘s1’, ‘s2’, and ‘s3’. Of course I have aliases to open those files quickly:

alias s1='mutt -f ../s1'
alias s2='mutt -f ../s2'
alias s3='mutt -f ../s3'

I have a number of these mbox files as sometimes I need to filter even further by patch set, or other things, and saving them all to different mboxes makes things go faster.

So, all the erofs patches are in one mbox, let’s open it up and review them, and save the patches that look good enough to apply to another mbox:

Turns out that not all patches need to be dealt with right now (moving erofs out of the staging directory requires other people to review it), so I just save those messages back to the todo mbox:

Now I have a single patch that I want to apply, but I need to add some acks that the maintainers of erofs provided. I do this by editing the "raw" message directly from within mutt. I open the individual messages from the maintainers, cut their Reviewed-by lines, and then edit the original patch and add those lines to it:

Some kernel maintainers right now are screaming something like “Automate this!”, “Patchwork does this for you!”, “Are you crazy?” Yeah, this is one place that I need to work on, but the time involved to do this is not that much and it’s not common that others actually review patches for subsystems I maintain, unfortunately.

The ability to edit a single message directly within my email client is essential. I end up having to fix up changelog text, edit the subject line to be correct, fix the mail headers to not do foolish things with text formats, and in some cases edit the patch itself when it is corrupted or needs to be fixed (I want a LinkedIn skill badge for "can edit diff files by hand and have them still work").

So one hard requirement I have is “editing a raw message from within the email client.” If an email client can not do this, it’s a non-starter for me, sorry.

So we now have a single patch that needs to be applied to the tree. I am already in the ~/linux/work/staging/ directory, and on the correct git branch for where this patch needs to go (how I handle branches and how patches move between them deserve a totally different blog post…)

I can apply this patch in one of two different ways, using git am -s ../s1 on the command line, piping the whole mbox into git and applying the patches directly, or I can apply them within mutt individually by using a macro.

When I have a lot of patches to apply, I just pipe the mbox file to git am -s as I’m comfortable with that, and it goes quick for multiple patches. It also works well as I have lots of different terminal windows open in the same directory when doing this and I can quickly toggle between them.

But we are talking about email clients at the moment, so here’s me applying a single patch to the local git tree:

All it took was hitting the L key. That key is set up as a macro in my mutt configuration file with a single line:

macro index L '| git am -s'\n

This macro pipes the output of the current message to git am -s.

The ability of mutt to pipe the current message (or messages) to external scripts is essential for my workflow in a number of different places. Not having to leave the email client but being able to run something else with that message, is a very powerful functionality, and again, a hard requirement for me.

So that’s it for applying development patches. It’s a bunch of the same tasks over and over:

  • collect patches by a common theme
  • filter the patches by a smaller subset
  • review them manually and responding if there are problems
  • saving “good” patches off to apply
  • applying the good patches
  • jump back to the first step

Doing that all within the email program and being able to quickly get in, and out of the program, as well as do work directly from the email program, is key.

Of course I do a “test build and sometimes test boot and then push git trees and notify author that the patch is applied” set of steps when applying patches too, but those are outside of my email client workflow and happen in a separate terminal window.

Stable patch review and apply

The process of reviewing patches for the stable tree is much like the development patch process, but it differs in that I never use ‘git am’ for applying anything.

The stable kernel tree, while under development, is kept as a series of patches that need to be applied to the previous release. This series of patches is maintained by using a tool called quilt. Quilt is very powerful and handles sets of patches that need to be applied on top of a moving base very easily. The tool was based on a crazy set of shell scripts written by Andrew Morton a long time ago, is currently maintained by Jean Delvare, and has been rewritten in perl to make it more maintainable. It handles thousands of patches easily and quickly and is used by many developers to handle kernel patches for distributions as well as other projects.

I highly recommend it as it allows you to reorder, drop, and add patches in the middle of the series, manipulate patches in all sorts of ways, and create new patches directly. I do this for the stable tree because lots of times we end up dropping patches from the middle of the series when reviewers say they should not be applied, adding new patches where needed as prerequisites of existing patches, and making other changes that, with git, would require lots of rebasing.
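For anyone who has not used it, the day-to-day quilt operations look roughly like this (the patch and file names are made up):

quilt push -a                      # apply the whole series on top of the base release
quilt delete some-bad-fix.patch    # drop a patch that reviewers rejected
quilt new extra-prereq.patch       # start a brand-new patch at the current position
quilt add drivers/usb/core/hub.c   # tell quilt which file the new patch will touch
# edit the file, then fold the changes into the new patch:
quilt refresh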

Rebasing a git tree does not work when you have developers working "down" from your tree. We usually have the rule with kernel development that if you have a public tree, it never gets rebased, otherwise no one can use it for development.

Anyway, the stable patches are kept in a quilt series in a repository that is kept under version control in git (complex, yeah, sorry.) That queue can always be found (here).

I do create a linux-stable-rc git tree that is constantly rebased based on the stable queue, for those who run test systems that can not handle quilt patches. That tree is found here and should not ever be used by anyone for anything other than automated testing. See this email for a bit more explanation of how these git trees should, and should not, be used.

With all that background information behind us, let’s look at how I take patches that are in Linus’s tree, and apply them to the current stable kernel queues:

First I open the stable mbox. Then I filter by everything that has upstream in the subject line. Then I filter again by alsa to only look at the alsa patches. I go through the individual patches, verifying that each really is something that should be applied to the stable tree, and determine what order to apply them in based on the date of the original commit.

I then hit F to pipe the message to a script that looks up the Fixes: tag in the message to determine which stable tree, if any, contains the commit that this patch fixes.
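That script is specific to my setup, but a minimal sketch of the idea, reading the patch on stdin and asking git which release tag first contained the commit named in the Fixes: tag, might be:

#!/bin/bash
# pull the commit id out of the first Fixes: tag in the piped message
commit=$(grep -m1 -oE 'Fixes: [0-9a-f]{12,}' | awk '{print $2}')
# ask a local copy of Linus's tree which release first contained that commit
# (the tree path here is just an example)
git -C ~/linux/linus describe --contains --match 'v*' "$commit"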

In this example, the patch should only go back to the 4.19 kernel tree, so when I apply it, I know to stop at that place and not go further.

To apply the patch, I hit A, which is another macro that I define in my mutt configuration:

macro index A |'~/linux/stable/apply_it_from_email'\n
macro pager A |'~/linux/stable/apply_it_from_email'\n

It is defined "twice" as you can have different key bindings for when you are looking at a mailbox's index of all messages and for when you are looking at the contents of a single message.

In both cases, I pipe the whole email message to my apply_it_from_email script.

That script digs through the message, finds the git commit id of the patch in Linus's tree, and then runs a different script that takes that commit id, exports the patch associated with it, and edits the message to add my signed-off-by to the patch, as well as dropping me into my editor to make any tweaks that might be needed (sometimes files get renamed, so I have to do that by hand). It also gives me one final chance to review the patch in my editor, which is usually easier than in the email client directly, as I have better syntax highlighting and can search and review the text more easily.

If all goes well, I save the file and the script continues and applies the patch to a bunch of stable kernel trees, one after another, adding the patch to the quilt series for that specific kernel version. To do all of this I had to spawn a separate terminal window as mutt does fun things to standard input/output when piping messages to a script, and I couldn’t ever figure out how to do this all without doing the extra spawn process.
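The heart of that operation can be approximated with a few commands; this is only a sketch of the idea, not the real apply_it_from_email script, and the paths are placeholders:

# export the upstream commit as a patch
git -C ~/linux/linus format-patch -1 --stdout "$commit" > /tmp/stable.patch
# add a Signed-off-by: line and make any manual fixups
${EDITOR:-vi} /tmp/stable.patch
# then, from within the quilt-managed queue for a given stable version:
quilt import /tmp/stable.patch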

Here it is in action as a video, since asciinema can't show multiple windows at the same time.

Once I have applied the patch, I save it away as I might need to refer to it again, and I move on to the next one.

This sounds like a lot of different steps, but I can process a lot of these relatively quickly. The patch review step is the slowest one here, as that of course can not be automated.

I later take those new patches that have been applied and run kernel build tests and other things before sending out emails saying they have been applied to the tree. But like with development patches, that happens outside of my email client workflow.

Bonus, sending email from the command line

In writing this up, I remembered that I do have some scripts that use mutt to send email out. I don’t normally use mutt for this for patch reviews, as I use other scripts for that (ones that eventually got turned into git send-email), so it’s not a hard requirement, but it is nice to be able to do a simple:

mutt -s "${subject}" "${address}" <  ${msg} >> error.log 2>&1

from within a script when needed.

Thunderbird also can do this, I have used:

thunderbird --compose "to='${address}',subject='${subject}',message=${msg}"

at times in the past when dealing with email servers that mutt can not connect to easily (i.e. gmail when using oauth tokens).

Summary of what I need from an email client

So, to summarize it all for Drew, here’s my list of requirements for me to be able to use an email client for kernel maintainership roles:

  • work with local mbox and maildir folders easily
  • open huge mbox and maildir folders quickly.
  • custom key bindings for any command. Sane defaults are always good, but everyone is used to a previous program, and retraining fingers can be hard.
  • create new key bindings for common tasks (like save a message to a specific mbox)
  • easily filter messages based on various things. Full regexes are not needed, see the PATTERNS section of ‘man muttrc’ for examples of what people have come up with over the years as being needed by an email client.
  • when sending/responding to an email, bring it up in the editor of my choice, with full headers. I know aerc already uses vim for this, which is great, as that makes it easy to send patches or include other files directly in an email body
  • edit a message directly from the email client and then save it back to the local mbox it came from
  • pipe the current message to an external program

That’s what I use for kernel development.

Oh, I forgot:

  • handle gpg encrypted email. Some mailing lists I am on send everything encrypted with a mailing list key which is needed to both decrypt the message and to encrypt messages sent to the list. SMIME could be used if GPG can’t work, but both of them are probably equally horrid to code support for, so I recommend GPG as that’s probably used more often.

Bonus things that I have grown to rely on when using mutt is:

  • handle displaying html email by piping to an external tool like w3m (a minimal mailcap/muttrc sketch for this follows the list)
  • send a message from the command line with a specific subject and a specific attachment if needed.
  • specify the configuration file to use as a command line option. It is usually easier to have one configuration file for a “work” account, and another one for a “personal” one, with different email servers and settings provided for both.
  • configuration files that can include other configuration files. Mutt allows me to keep all of my “core” keybindings in one config file and just specific email server options in separate config files, allowing me to make configuration management easier.
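For the HTML case mentioned above, the usual mutt recipe is a mailcap entry plus auto_view; a minimal version looks like this:

# ~/.mailcap
text/html; w3m -I %{charset} -T text/html -dump %s; copiousoutput
# ~/.muttrc
auto_view text/html
alternative_order text/plain text/html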

If you have made it this far, and you aren’t writing an email client, that’s amazing, it must be a slow news day, which is a good thing. I hope this writeup helps others to show them how mutt can be used to handle developer workflows easily, and what is required of a good email client in order to be able to do all of this.

Hopefully other email clients can get to state where they too can do all of this. Competition is good and maybe aerc can get there someday.

August 14, 2019 12:37 PM

June 15, 2019

Greg KH

Linux stable tree mirror at github

As everyone seems to like to put kernel trees up on github for random projects (based on the crazy notifications I get all the time), I figured it was time to put up a semi-official mirror of all of the stable kernel releases on github.com

It can be found at: https://github.com/gregkh/linux and I will try to keep it up to date with the real source of all kernel stable releases at https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/

It differs from Linus’s tree at: https://github.com/torvalds/linux in that it contains all of the different stable tree branches and stable releases and tags, which many devices end up building on top of.
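If you want to build on top of a stable branch, adding the mirror as a remote is all it takes (the branch name here is just an example):

git remote add stable https://github.com/gregkh/linux.git
git fetch stable
git checkout -b my-product-branch stable/linux-5.1.y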

So, mirror away!

Also note, this is a read-only mirror, any pull requests created on it will be gleefully ignored, just like happens on Linus’s github mirror.

If people think this is needed on any other git hosting site, just let me know and I will be glad to push to other places as well.

This notification was also cross-posted on the new http://people.kernel.org/ site, go follow that for more kernel developer stuff.

June 15, 2019 08:10 PM


February 25, 2014

Brandon Philips

Slides: etcd at Go PDX

Last week I gave a talk at the PDX Go meetup (Go PDX). The presentation is a refinement on the talk I gave last month at GoSF but contains mostly the same content.

Several people in the audience had some experience with etcd already so it was great to hear their feedback on the project as a whole. The questions included partition tolerance and scaling properties, use cases and general design. It was a smart crowd and it was great to meet so many PDX Gophers.


by Brandon Philips at February 25, 2014 12:00 AM

February 16, 2014

Brandon Philips

Getting to Goven

This is the step by step story of how etcd, a project written in Go, arrived at using goven for library dependency management. It went through several evolutionary steps while trying to find a good solution to these basic goals:

  • Reproducible builds: given the same git hash and version of the Go compiler, we wanted an identical binary every time.
  • Zero dependencies: developers should be able to fork on github, make a change, build, test and send a PR without having anything more than a working Go compiler installed.
  • Cross platform: compile and run on OSX, Linux and Windows. Bonus points for cross-compilation.

Checked in GOPATH

Initially, to get reproducible builds and zero dependencies we checked in a copy of the GOPATH to “third_party/src”. Over time we encountered several problems:

  1. “go get github.com/coreos/etcd” was broken since downstream dependencies would change master and “go get” would set up a GOPATH that looked different from our checked-in version.
  2. Windows developers had to have a working bash. Soon we had to maintain a copy of our build script written in Powershell.

At the time I felt that “go get” was an invalid use case since etcd was just a project built in Go, and “go get” is primarily useful for easily grabbing libraries when you are hacking on something. However, there were mounting user requests for a “go gettable” version of etcd.

To solve the Windows problem I wrote a script called “third_party.go” which ported the GOPATH management tools and the shell version of the “build” script to Go.

third_party.go

third_party.go worked well for a few weeks and we could remove the duplicate build logic in the Powershell scripts. The basic usage was simple:

# Bump the raft dependency in the custom GOPATH
go run third_party.go bump github.com/coreos/go-etcd
# Use third_party.go to set GOPATH to third_party/src and build
go run third_party.go build github.com/coreos/etcd

But, there was a fatal flaw with this setup: it broke cross compilation via GOOS and GOARCH.

GOOS=linux go run third_party.go build github.com/coreos/etcd
fork/exec /var/folders/nq/jrsys0j926z9q3cjp1yfbhqr0000gn/T/go-build584136562/command-line-arguments/_obj/exe/third_party: exec format error

The reason is that GOOS and GOARCH get used internally by “go run”, meaning it literally tries to build “third_party.go” as a Linux binary and run it. Running a Linux binary on an OSX machine doesn’t work.

This solution didn’t get us any closer to being “go gettable” either. There were several inquiries per week for this. So, I started looking around for better solutions and eventually settled on goven.

goven and goven-bump

goven achieves all of the desirable traits: reproducible builds, zero dependencies to start developing, cross compilation, and as a bonus “go install github.com/coreos/etcd” works.

The basic theory of operation is that it checks all dependencies into subpackages of your project. Instead of importing “code.google.com/p/goprotobuf” you import “github.com/coreos/etcd/third_party/code.google.com/p/goprotobuf”. It makes the imports uglier, but it is automated by goven.

Along the way I wrote some helper tools to assist in bumping dependencies, which can be found on Github at philips/goven-bump. The scripts “goven-bump” and “goven-bump-commit” grab the hg revision or git hash of the dependency along with running goven. This makes bumping a dependency and getting a basic commit message as easy as:

cd ${GOPATH}/github.com/coreos/etcd
goven-bump-commit code.google.com/p/goprotobuf
git commit -m 'bump(code.google.com/p/goprotobuf): 074202958b0a25b4d1e194fb8defe5d69c300774'

goven introduces some additional complexity for the maintainers of the project. But the simplicity it presents to regular contributors and to users used to “go get” makes it worth the additional effort.

by Brandon Philips at February 16, 2014 12:00 AM

September 03, 2009

Valerie Aurora

Carbon METRIC BUTTLOAD print

I just read Charlie Stross's rant on reducing his household's carbon footprint. Summary: He and his wife can live a life of monastic discomfort, wearing moldy scratchy 10-year-old bamboo fiber jumpsuits and shivering in their flat - or, they can cut out one transatlantic flight per year and achieve the equivalent carbon footprint reduction.

I did a similar analysis back around 2007 or so and had the same result: I've got a relatively trim carbon footprint compared to your average first-worlder, except for the air travel that turns it into a bloated planet-eating monster too extreme to fall under the delicate term "footprint." Like Charlie, I am too practical, too technophilic, and too hopeful to accept that the only hope of saving the planet is to regress to third world living standards (fucking eco-ascetics!). I decided that I would only make changes that made my life better, not worse - e.g., living in a walkable urban center (downtown Portland, now SF). But the air travel was a stumper. I liked traveling, and flying around the world for conferences is a vital component of saving the world through open source. Isn't it? Isn't it?

Two things happened that made me re-evaluate my air travel philosophy. One, I started a file systems consulting business and didn't have a lot of spare cash to spend on fripperies. Two, I hurt my back and sitting became massively uncomfortable (still recovering from that one). So I cut down on the flying around the world to Linux conferences involuntarily.

You know what I discovered? I LOVE not flying around the world for Linux conferences. I love taking only a few flights a year. I love flying mostly in the same time zone (yay, West coast). I love having the energy to travel for fun because I'm not all dragged out by the conference circuit. I love hanging out with my friends who live in the same city instead of missing out on all the parties because I'm in fucking Venezuela instead.

Save the planet. Burn your frequent flyer card.

September 03, 2009 07:04 AM


February 18, 2009

Stephen Hemminger

Parallelizing netfilter

Linux networking receive processing was mostly single threaded until the advent of MSI-X and multiqueue receive hardware. Now, with many cards, it is possible to be processing packets on multiple CPUs and cores at once. All this is great, and improves performance for the simple case.

But most users don't just use simple networking. They use useful features like netfilter to do firewalling, NAT, connection tracking, and all other forms of weird and wonderful things. The netfilter code has been tuned over the years, but there are still several hot locks in the receive path. Most of these are reader-writer locks, which are actually the worst kind, much worse than a simple spin lock. The problem with locks on modern CPUs is that even in the uncontested case, a lock operation means a full-stop cache miss.

With the help of Eric Dumazet, Rick Jones, Martin Josefsson, and others, it looks like there is a solution to most of these. I am excited to see how it all pans out, but it could mean a big performance increase for any kind of packet-intensive netfilter processing. Stay tuned.

by Linux Network Plumber (noreply@blogger.com) at February 18, 2009 05:51 AM

September 25, 2010

Andy Grover

Plumbers Down Under

Since the original Linux Plumbers Conference (http://www.linuxplumbersconf.org/) drew much inspiration from LCA's (http://lca2011.linux.org.au/) continuing success, it's cool to see some of what Plumbers has done be seen as worthy of emulating at next year's LCA (http://airlied.livejournal.com/73491.html)!

LCA seems like a great opportunity to specifically try to make progress on cross-project issues. It's quite well-attended, so it's likely the people you need in the room to make a decision will be in the room.

by andy.grover at September 25, 2010 01:50 PM

September 10, 2010

Andy Grover

Increasing office presence for remote workers

I work from home. My basement, actually. I recently read an article in the Times about increasing the office presence of remote employees with robots (http://www.nytimes.com/2010/09/05/science/05robots.html). Pretty interesting. How much does one of those robo-Beltzners cost? $5k? This is a neat idea but it's still not released so who knows.

I've been thinking about other options for establishing a stronger office presence for myself. Recently I bought a webcam. If I used this to broadcast me sitting at my desk on Ustream or Livestream, that would certainly make it so my coworkers (and the rest of the world) could see what I was up to, every second of the workday. This is actually a lot more exposure than an office worker, even in a cubicle, would expect. If I'm in an office cube, I might have people stop by, but I'll know they're there, and they won't always be there. There is still generally solitude and privacy to concentrate on the code and be productive. I'm currently trying something that I think is closer to the balance of a real office:

  • Take snapshots from the webcam every 15 minutes
  • Only during normal working hours
  • Give a 3 second audible warning before capturing
  • Upload to an intranet webserver

I haven't found this to be too much of an imposition -- in fact, the quarter-hourly beeps are somewhat like a clock chime.

In the beginning, it's hard to resist mugging for the camera, but that passes. Think about how this is better than irc or IM, both of which do have activity/presence indicators, but which either aren't used, or are poorly implemented and often wrong. How much more likely are you, as a colleague of mine, to IM, email, video chat, or call me if you can see I'm at my desk and working? No more "around?" messages needed. You could even see if I'm looking cheerful, or perhaps otherwise indisposed, heh heh.

On a technical note, although there were many Debian packages that kind-of did what I wanted, it turned out to be surprisingly easy to roll my own in about 20 lines of Python (http://github.com/agrover/pysnapper/blob/master/webcam.py).

Anyways, just something I've been playing around with, while I wait for my robo-avatar to be set up down at HQ...
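A rough shell equivalent of that snapshot loop (the original is about 20 lines of Python; fswebcam, the upload host, and the working hours here are all assumptions) might look like:

while true; do
    hour=$(date +%H)
    if [ "$hour" -ge 9 ] && [ "$hour" -lt 17 ]; then
        printf '\a'; sleep 3                      # 3-second audible warning
        fswebcam -r 640x480 /tmp/presence.jpg     # grab a frame from the webcam
        scp -q /tmp/presence.jpg intranet:/var/www/presence/me.jpg
    fi
    sleep 900                                     # wait 15 minutes
done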

by andy.grover at September 10, 2010 05:20 PM

November 08, 2009

Valerie Aurora

Migrated to WordPress

My LiveJournal blog name - valhenson - was the last major holdover from my old name, Val Henson. I got a new Social Security card, passport, and driver's license with my new name several months ago, but migrating my blog? That's hard! Or something. I finally got around to moving to a brand-spanking-new blog at WordPress:

Valerie Aurora's blog

Update your RSS reader with the above if you still want to read my blog - I won't be republishing my posts to my new blog on this LiveJournal blog.

If you're aware of any other current instances of "Val Henson" or "Valerie Henson," let me know! I obviously can't change my name on historical documents, like research papers or interviews, but if it's vaguely real-time-ish, I'd like to update it.

One web page I'm going to keep as Val Henson for historical reasons is my Val Henson is a Man joke. Several of the pages on my web site were created after the fact as vehicles for amusing pictures or graphics I had lying around. In this case, my friend Dana Sibera created a pretty damn cool picture of me with a full beard and I had to do something with it.



It's doubly wild now that I have such short hair.

November 08, 2009 11:36 PM