Planet Linux Plumbers Conf

June 03, 2009

Darrick Wong

Picspam!

/me rounded up a bunch of (old) panoramas and put them into the high-definition panorama viewer. Be sure to check out the (huge spike in memory cache when you load the) panorama previewer (click the "See All" button).

June 03, 2009 02:27 AM

August 27, 2008

Stephen Hemminger

Exploring transactional filesystems

In order to implement router-style semantics, Vyatta allows setting many different configuration variables and then applying them all at once with a commit command. Currently, this is implemented by a combination of shell magic and unionfs. The problem is that keeping unionfs up to date and fixing the resulting crashes is a major pain.

There must be better alternatives; current options include:
  • Replace unionfs with aufs, which has fewer users yelling at it and more developers.
  • Use a filesystem like btrfs which has snapshots. This changes the model and makes APIs like "what changed?" hard to implement.
  • Move to a pure userspace model using git (sketched below). The problem here is that git as currently written is meant for users, not transactions.
  • Use a combination of copy, bind mount, and rsync.
  • Use a database for configuration. This is easier for general queries but is the most work; conversion from the existing format would be a pain.
Looks like a fun/hard problem. Don't expect any resolution soon.
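
For a flavor of the git option, here is a minimal sketch of commit-style semantics on top of a configuration work tree (the path and command names are hypothetical, not Vyatta's):

#!/bin/sh
# Sketch only: commit/rollback semantics for a config tree kept in git.
# Assumes the live configuration is a git work tree at $CFG (hypothetical path).
CFG=/opt/config
cd "$CFG" || exit 1
case "$1" in
  diff)    git diff ;;                                  # "what changed?" is a free query
  commit)  git add -A && git commit -q -m "config commit" ;;
  discard) git checkout -- . && git clean -qfd ;;       # throw away uncommitted edits
  *)       echo "usage: $0 {diff|commit|discard}" >&2; exit 1 ;;
esac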

by Linux Network Plumber (noreply@blogger.com) at August 27, 2008 10:20 PM

October 29, 2019

Paul E. McKenney

The Old Man and His Smartphone

I recently started using my very first smartphone, and it was suggested that I blog about the resulting experiences.  So here you go!

by paulmck at October 29, 2019 01:57 PM

The Old Man and His Smartphone, Episode VII

The previous episode speculated about the past, so this episode will make some wild guesses about the future.

There has been much hue and cry about the ill effects of people being glued to their smartphones.  I have tended to discount this viewpoint due to having seen a great many people's heads buried in newspapers, magazines, books, and television screens back in the day.  And yes, there was much hue and cry about that as well, so I guess some things never change.

However, a few years back, the usual insanely improbable sequence of events resulted in me eating dinner with the Chief of Police of a mid-sized but prominent city, both of which will go nameless.  He called out increased smartphone use as having required him to revamp his training programs.  You see, back in the day, typical recruits could reasonably be expected to have the social skills required to defuse a tense situation, using what he termed "verbal jiujitsu".  However, present-day recruits need to take actual classes in order to master this lost art.

I hope that we can all agree that it is far better for officers of the law to maintain order through vocal means, perhaps augmented with force of personality, especially given that the alternative seems to be the use of violence.  So perhaps the smartphone is responsible for some significant social change after all.  Me, I will leave actual judgment on this topic to psychologists, social scientists, and of course historians.  Not that any of them are likely to reach a conclusion that I would trust.  Based on past experience, far from it!  The benefit of leaving such judgments to them is instead that it avoids me wasting any further time on such judgments.  Or so I hope.

It is of course all too easy to be extremely gloomy about the overall social impact of smartphones.  One could easily argue that people freely choose spreading misinformation over accessing vast stores of information, bad behavior over sweetness and light, and so on and so forth.

But it really is up to each and every one of us.  After all, if life were easy, I just might do a better job of living mine.  So maybe we all need to brush up on our social skills.  And to do a better job of choosing what to post, to say nothing of what posts to pass on.  Perhaps including the blog posts in this series!

Cue vigorous arguments on the appropriateness of these goals, or, failing that, the best ways to accomplish them.  ;-)

by paulmck at October 29, 2019 01:50 PM

October 23, 2019

Sri Ramkrishna

Let’s fight back against patent trolls

The GNOME Foundation has taken the extraordinary step of not just defending itself against a patent troll but aggressively going after them. This is an important battle. Let me explain.

The initial reason for Rothschild to come after us is clear: they believe that the GNOME Foundation has money and that they can shake us down and get some easy money with their portfolio of patents.

If we had lost or given them the money, it would have made us a mark not just for Rothschild, but for every other patent troll, who are probably watching this unfold. Worse, it means that all the other non-profits would be fair game. We do not want to set that precedent. We need to send a strong message that if they attack one of us, they attack us all.

The GNOME Foundation manages infrastructure around the GNOME Project, which comprises an incredible amount of software built over a nearly 23-year period. This software is used in everything from medical devices, to consumer devices like the Amazon Kindle and Smart TVs, and of course the GNOME desktop.

The GNOME Project provides the tooling, software, and more importantly the maintenance and support for the community. Bankrupting the GNOME Foundation would mean that these functions would take a terrible blow, crippling the important work we do. The companies that depend on these tools and software would be similarly hit. And that is just one non-profit foundation.

There are many others, Apache, the Software Freedom Conservancy, and the FSF among them. They would be just as vulnerable as we are now.

What Rothschild has done is attack not just GNOME, but all of us in Free Software and Open Source, the toolchains that we depend on, and the software we use. We can’t let that happen. We need to strongly repudiate this patent troll, and not only defend ourselves but neuter them and make an example of them to warn off any other patent troll that thinks we are easy pickings.

Companies, individuals, and governments should give money so we can make a singular statement – not here, not now, not ever!  Let’s set that precedent. Donate to the cause. GNOME has a history of conquering its bullies. But we can’t do that without your help.

An American President once said “They counted on us to be passive. They counted wrong.”

Donate now!

by sri at October 23, 2019 03:59 AM

August 14, 2019

Greg KH

Patch workflow with mutt - 2019

Given that the main development workflow for most kernel maintainers is with email, I spend a lot of time in my email client. For the past few decades I have used (mutt), but every once in a while I look around to see if there is anything else out there that might work better.

One project that looks promising is (aerc), which was started by (Drew DeVault). It is a terminal-based email client written in Go, and it relies on a number of other Go libraries to handle the “grungy” work of dealing with IMAP, email parsing, and the other free-flow text handling that email requires.

aerc isn’t in a usable state for me just yet, but Drew asked if I could document exactly how I use an email client for my day-to-day workflow to see what needs to be done to aerc to have me consider switching.

Note, this isn’t a criticism of mutt at all. I love the tool, and spend more time using that userspace program than any other. But as anyone who knows email clients will tell you, they all suck; it’s just that mutt sucks less than everything else (that’s literally their motto).

I did a (basic overview of how I apply patches to the stable kernel trees quite a few years ago) but my workflow has evolved over time, so instead of just writing a private email to Drew, I figured it was time to post something showing others just how the sausage really is made.

Anyway, my email workflow can be divided up into 3 different primary things that I do:

  • basic email reading, management, and sorting
  • reviewing new development patches and applying them to a source repository.
  • reviewing potential stable kernel patches and applying them to a source repository.

Given that all stable kernel patches need to already be in Linus’s kernel tree first, the workflow for working with the stable tree is much different from the new patch workflow.

Basic email reading

All of my email ends up in one of two “inboxes” on my local machine. The first is for everything that is sent directly to me (either with To: or Cc:), as well as a number of mailing lists that I read in full because I am a maintainer of those subsystems (like (USB), or (stable)). The second inbox consists of other mailing lists that I do not read every message of, but review as needed, and can reference when I need to look something up. Those mailing lists are the “big” linux-kernel mailing list, so I have a local copy to search from when I am offline (due to traveling), as well as other “minor” development mailing lists that I like to keep a copy of locally, like linux-pci, linux-fsdevel, and a few other smaller vger lists.

I get these maildir folders synced with the mail server using (mbsync), which works really well and is much faster than (offlineimap), which I used for many, many years but which ends up being really slow when you do not live on the same continent as the mail server. (Luis’s) recent post about switching to mbsync finally pushed me to take the time to configure it all properly, and I am glad that I did.
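
For anyone curious, the shape of the configuration is roughly this (a minimal sketch with hypothetical server, account, and folder names, not my actual config):

IMAPAccount work
Host mail.example.com
User greg
PassCmd "pass mail/work"
SSLType IMAPS

IMAPStore work-remote
Account work

MaildirStore work-local
Path ~/mail/
Inbox ~/mail/INBOX

Channel work
Far :work-remote:
Near :work-local:
Patterns INBOX
Create Near
SyncState *

Then a simple mbsync work (or mbsync -a for all channels) pulls everything down.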

Let’s ignore my “lists” inbox, as that should be able to be read by any email client by just pointing it at it. I do this with a simple alias:

alias muttl='mutt -f ~/mail_linux/'

which allows me to type muttl at any command line to instantly bring it up.

What I spend most of the time in is my “main” mailbox, and that is in a local maildir that gets synced when needed in ~/mail/INBOX/. A simple mutt on the command line brings this up.

Yes, everything just ends up in one place. In handling my mail, I prune relentlessly. Everything ends up in one of 3 states for what I need to do next:

  • not read yet
  • read and left in INBOX as I need to do something “soon” about it
  • read and it is a patch to do something with

Everything that does not require a response, or that I’ve already responded to, gets deleted from the main INBOX at that point in time, or saved into an archive in case I need to refer back to it again (like mailing list messages).

That last state makes me save the message into one of two local maildirs, todo and stable. Everything in todo is a new patch that I need to review, comment on, or apply to a development tree. Everything in stable is something that has to do with patches that need to get applied to the stable kernel tree.

Side note, I have scripts that run frequently and email me any patches that need to be applied to the stable kernel trees, when they hit Linus’s tree. That way I can just live in my email client and have everything that needs to be applied to a stable release in one place.
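
The notifier itself can be tiny; a hypothetical sketch (not the actual script, and the path and address are made up):

#!/bin/sh
# Mail myself any commits that hit Linus's tree in the last day
# and are marked with a stable tag.
cd ~/linux/work/torvalds || exit 1          # hypothetical path to Linus's tree
git fetch -q origin
git log --oneline --since='1 day ago' --grep='stable@vger.kernel.org' origin/master |
while read sha subject; do
    git format-patch -1 --stdout "$sha" |
        mutt -s "stable candidate: $subject" greg@example.com
done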

I sweep my main INBOX every few hours, and sort things out by either quickly responding, deleting, archiving, or saving into the todo or stable directory. I don’t achieve a constant “inbox zero”, but if I only have 40 or so emails in there, I am doing well.

So, for this main workflow, I need an easy way to:

  • filter the INBOX by a pattern so that I only see one “type” of message at a time (more below)
  • read an email
  • write an email
  • respond to existing email, and use vim as an editor as my hands have those key bindings burned into them.
  • delete an email
  • save an email to one of two mboxes with a press of a few keystrokes
  • bulk delete/archive/save emails all at once

These are all tasks that I bet almost everyone needs to do all the time, so a tool like aerc should be able to do that easily.

A note about filtering. As everything comes into one inbox, it is easier to filter that mbox by sender, list, or subject so that I can process related messages all at once.

As an example, I want to read all of the messages sent to the linux-usb mailing list right now, and not see anything else. To do that, in mutt, I press l (limit) which brings up a prompt for a filter to apply to the mbox. This ability to limit messages to one type of thing is really powerful and I use it in many different ways within mutt.
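
For a flavor of what the limit prompt accepts (these are stock mutt patterns; see the PATTERNS section of ‘man muttrc’):

l ~C linux-usb@vger.kernel.org    # messages To: or Cc: the linux-usb list
l ~s staging                      # subject contains "staging"
l ~f greg                         # sender matches "greg"
l ~N                              # only messages that are still new/unread
l all                             # clear the limit and show everything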

Here’s an example of me just viewing all of the messages that are sent to the linux-usb mailing list, and saving them off after I have read them.

This isn’t that complex, but it has to work quickly and well on mailboxes that are really really big. As an example, here’s me opening my “all lists” mbox and filtering on the linux-api mailing list messages that I have not read yet. It’s really fast as mutt caches lots of information about the mailbox and does not require reading all of the messages each time it starts up to generate its internal structures.

Saving a message to the todo directory is a two-keystroke sequence, .t, which saves the message there automatically.

Again, those are bindings I set up years ago: , jumps to the specific mbox, and . copies the message to that location.
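
In muttrc terms, bindings like these look something like the following (a sketch of the idea, not my exact configuration):

macro index ,t "<change-folder>=todo<enter>"  "jump to the todo mbox"
macro index .t "<copy-message>=todo<enter>"   "copy this message to todo"
macro index .s "<copy-message>=stable<enter>" "copy this message to stable"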

Now you see why using mutt is not exactly obvious, those bindings are not part of the default configuration and everyone ends up creating their own custom key bindings for whatever they want to do. It takes a good amount of time to figure this out and set things up how you want, but once you are over that learning curve, you can do very complex things easily. Much like an editor (emacs, vim), you can configure them to do complex things easily, but getting to that level can take a lot of time and knowledge. It’s a tool, and if you are going to rely on it, you should spend the time to learn how to use your tools really well.

Hopefully aerc can get to this level of functionality soon. Odds are everyone else does something much like this, as my use-case is not unusual.

Now let’s get to the unusual use cases, the fun things:

Development Patch review and apply

When I decide it’s time to review and apply patches, I do so by subsystem (as I maintain a number of different ones). As all pending patches are in one big maildir, I filter the messages by the subsystem I care about at the moment, and save all of the messages out to a local mbox file that I call s (hey, naming is hard, it gets worse, just wait…)

So, in my linux/work/ local directory, I keep the development trees for different subsystems like usb, char-misc, driver-core, tty, and staging.

Let’s look at how I handle some staging patches.

First, I go into my ~/linux/work/staging/ directory, which I will stay in while doing all of this work. I open the todo mbox with a quick ,t pressed within mutt (a macro I picked from somewhere long ago, I don’t remember where…), and then filter all staging messages, and save them to a local mbox with the following keystrokes:

mutt        # open my main INBOX
,t          # jump to the todo mbox (custom macro)
l staging   # limit the view to the staging messages
T           # tag the messages matching a pattern
s ../s      # save the tagged messages to the local mbox ../s

Yes, I could skip the l staging step, and just do T staging instead of T, but it’s nice to see what I’m going to save off first before doing so.

Now all of those messages are in a local mbox file that I can open with a single keystroke, ’s’ on the command line. That is an alias:

alias s='mutt -f ../s'

I then dig around in that mbox, sort patches by driver type to see everything for that driver at once by filtering on the name and then save those messages to another mbox called ‘s1’ (see, I told you the names got worse.)

s
l erofs
T
s ../s1

I have lots of local mbox files all “intuitively” named ‘s1’, ‘s2’, and ‘s3’. Of course I have aliases to open those files quickly:

alias s1='mutt -f ../s1'
alias s2='mutt -f ../s2'
alias s3='mutt -f ../s3'

I have a number of these mbox files as sometimes I need to filter even further by patch set, or other things, and saving them all to different mboxes makes things go faster.

So, all the erofs patches are in one mbox; let’s open it up, review them, and save the patches that look good enough to apply to another mbox.

Turns out that not all patches need to be dealt with right now (moving erofs out of the staging directory requires other people to review it), so I just save those messages back to the todo mbox.

Now I have a single patch that I want to apply, but I need to add the acks that the maintainers of erofs provided. I do this by editing the “raw” message directly from within mutt. I open the individual messages from the maintainers, cut their Reviewed-by: lines, and then edit the original patch and add those lines to it.

Some kernel maintainers right now are screaming something like “Automate this!”, “Patchwork does this for you!”, “Are you crazy?” Yeah, this is one place that I need to work on, but the time involved to do this is not that much and it’s not common that others actually review patches for subsystems I maintain, unfortunately.

The ability to edit a single message directly within my email client is essential. I end up having to fix up changelog text, edit the subject line to be correct, fix the mail headers to not do foolish things with text formats, and in some cases, edit the patch itself for when it is corrupted or needs to be fixed (I want a LinkedIn skill badge for “can edit diff files by hand and have them still work”).

So one hard requirement I have is “editing a raw message from within the email client.” If an email client can not do this, it’s a non-starter for me, sorry.

So we now have a single patch that needs to be applied to the tree. I am already in the ~/linux/work/staging/ directory, and on the correct git branch for where this patch needs to go (how I handle branches and how patches move between them deserves a totally different blog post…).

I can apply this patch in one of two different ways: using git am -s ../s1 on the command line to pipe the whole mbox into git and apply the patches directly, or applying them individually within mutt by using a macro.

When I have a lot of patches to apply, I just pipe the mbox file to git am -s as I’m comfortable with that, and it goes quickly for multiple patches. It also works well as I have lots of different terminal windows open in the same directory when doing this and I can quickly toggle between them.

But we are talking about email clients at the moment, so here’s me applying a single patch to the local git tree.

All it took was hitting the L key. That key is set up as a macro in my mutt configuration file with a single line:

macro index L '| git am -s'\n

This macro pipes the current message to git am -s.

The ability of mutt to pipe the current message (or messages) to external scripts is essential for my workflow in a number of different places. Not having to leave the email client while being able to run something else with that message is very powerful functionality, and again, a hard requirement for me.

So that’s it for applying development patches. It’s a bunch of the same tasks over and over:

  • collect patches by a common theme
  • filter the patches down to a smaller subset
  • review them manually and respond if there are problems
  • save the “good” patches off to apply
  • apply the good patches
  • jump back to the first step

Doing all of that within the email program, being able to get in and out of the program quickly, and being able to do work directly from the email program, is key.

Of course I do a “test build and sometimes test boot and then push git trees and notify author that the patch is applied” set of steps when applying patches too, but those are outside of my email client workflow and happen in a separate terminal window.

Stable patch review and apply

The process of reviewing patches for the stable tree is much like the development patch process, but it differs in that I never use ‘git am’ for applying anything.

The stable kernel tree, while under development, is kept as a series of patches that need to be applied to the previous release. This series of patches is maintained by using a tool called (quilt). Quilt is very powerful and handles sets of patches that need to be applied on top of a moving base very easily. The tool was based on a crazy set of shell scripts written by Andrew Morton a long time ago, and is currently maintained by Jean Delvare, who has rewritten it in perl to make it more maintainable. It handles thousands of patches easily and quickly and is used by many developers to handle kernel patches for distributions as well as other projects.

I highly recommend it, as it allows you to reorder, drop, and add patches in the middle of the series, manipulate patches in all sorts of ways, and create new patches directly. I do this for the stable tree because lots of times we end up dropping patches from the middle of the series when reviewers say they should not be applied, adding new patches where needed as prerequisites of existing patches, and making other changes that, with git, would require lots of rebasing.
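
For a flavor of it, the day-to-day quilt operations are simple one-liners (standard quilt usage):

quilt series              # list all patches in the series
quilt push                # apply the next patch
quilt pop                 # unapply the topmost patch
quilt import fix.patch    # insert a new patch at the current position
quilt delete bad.patch    # drop a patch from the series
quilt new tweak.patch     # start a brand-new patch
quilt add file.c          # track a file in the current patch
quilt refresh             # regenerate the current patch from your edits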

Rebasing a git tree does not work when you have developers working “down” from your tree. We usually have the rule with kernel development that if you have a public tree, it never gets rebased, otherwise no one can use it for development.

Anyway, the stable patches are kept in a quilt series in a repository that is kept under version control in git (complex, yeah, sorry.) That queue can always be found (here).

I do create a linux-stable-rc git tree that is constantly rebased based on the stable queue for those who run test systems that can not handle quilt patches. That tree is found (here) and should not ever be used by anyone for anything other than automated testing. See (this email) for a bit more explanation of how these git trees should, and should not, be used.

With all that background information behind us, let’s look at how I take patches that are in Linus’s tree, and apply them to the current stable kernel queues:

First I open the stable mbox. Then I filter by everything that has upstream in the subject line. Then I filter again by alsa to only look at the alsa patches. I look at the individual patches, verifying that each really is something that should be applied to the stable tree, and determine what order to apply them in based on the date of the original commit.

I then hit F to pipe the message to a script that looks up the Fixes: tag in the message to determine which release first contained the commit that this patch fixes, and therefore how far back in the stable trees the fix needs to go.

In this example, the patch should only go back to the 4.19 kernel tree, so when I apply it, I know to stop at that place and not go further.
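
The core of that lookup is small enough to sketch here (a simplification, not the actual script, and the tree path is hypothetical):

#!/bin/sh
# Read a patch email on stdin, find the Fixes: tag, and report which
# release first contained the commit being fixed.
sha=$(grep -i -m1 '^Fixes:' | awk '{print $2}')
[ -n "$sha" ] || { echo "no Fixes: tag"; exit 1; }
cd ~/linux/work/torvalds || exit 1      # hypothetical path to Linus's tree
git describe --contains "$sha"

If that prints something like v4.19-rc1, the commit being fixed first shipped in 4.19, so the fix only needs to go back as far as the 4.19 stable tree.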

To apply the patch, I hit A, which is another macro that I define in my mutt configuration:

macro index A |'~/linux/stable/apply_it_from_email'\n
macro pager A |'~/linux/stable/apply_it_from_email'\n

It is defined “twice” because you can have different key bindings when you are looking at a mailbox’s index of all messages versus when you are looking at the contents of a single message.

In both cases, I pipe the whole email message to my apply_it_from_email script.

That script digs through the message, finds the git commit id of the patch in Linus’s tree, and then runs a different script that takes that commit id, exports the patch associated with it, edits the message to add my signed-off-by, and drops me into my editor to make any tweaks that might be needed (sometimes files get renamed, so I have to do that by hand, and it gives me one final chance to review the patch in my editor, which is usually easier than in the email client directly, as I have better syntax highlighting and can search and review the text better).

If all goes well, I save the file, and the script continues and applies the patch to a bunch of stable kernel trees, one after another, adding the patch to the quilt series for each specific kernel version. To do all of this I had to spawn a separate terminal window, as mutt does fun things to standard input/output when piping messages to a script, and I couldn’t ever figure out how to do it all without the extra spawned process.
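
Stripped of all its error handling, the flow of that script looks roughly like this (a sketch with hypothetical paths; the real script does much more):

#!/bin/sh
# Read a patch email on stdin, find the upstream commit id, export that
# commit from Linus's tree, let me edit it, then queue it everywhere.
msg=$(mktemp)
cat > "$msg"
commit=$(grep -m1 -Eo '[0-9a-f]{40}' "$msg")          # commit id in Linus's tree
cd ~/linux/work/torvalds || exit 1                    # hypothetical path
patch=$(git format-patch -1 "$commit" -o /tmp)        # export the patch
xterm -e "${EDITOR:-vi}" "$patch"                     # separate window: mutt owns stdin/stdout
for q in ~/linux/stable/stable-queue/queue-*; do      # hypothetical queue layout
    # the real script picks only the trees the fix applies to
    cp "$patch" "$q/" && basename "$patch" >> "$q/series"
done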

Here it is in action, as a video, since (asciinema) can’t show multiple windows at the same time.

Once I have applied the patch, I save it away as I might need to refer to it again, and I move on to the next one.

This sounds like a lot of different steps, but I can process a lot of these relatively quickly. The patch review step is the slowest one here, as that of course can not be automated.

I later take those new patches that have been applied and run kernel build tests and other things before sending out emails saying they have been applied to the tree. But like with development patches, that happens outside of my email client workflow.

Bonus, sending email from the command line

In writing this up, I remembered that I do have some scripts that use mutt to send email out. I don’t normally use mutt this way for patch reviews, as I use other scripts for that (ones that eventually got turned into git send-email), so it’s not a hard requirement, but it is nice to be able to do a simple:

mutt -s "${subject}" "${address}" <  ${msg} >> error.log 2>&1

from within a script when needed.

Thunderbird can also do this; I have used:

thunderbird --compose "to='${address}',subject='${subject}',message=${msg}"

at times in the past when dealing with email servers that mutt can not connect to easily (i.e. gmail when using oauth tokens).

Summary of what I need from an email client

So, to summarize it all for Drew, here’s my list of requirements for me to be able to use an email client for kernel maintainership roles:

  • work with local mbox and maildir folders easily
  • open huge mbox and maildir folders quickly.
  • custom key bindings for any command. Sane defaults are always good, but everyone is used to a previous program, and retraining fingers can be hard.
  • create new key bindings for common tasks (like saving a message to a specific mbox)
  • easily filter messages based on various things. Full regexes are not needed; see the PATTERNS section of ‘man muttrc’ for examples of what people have come up with over the years as being needed by an email client.
  • when sending/responding to an email, bring it up in the editor of my choice, with full headers. I know aerc already uses vim for this, which is great, as that makes it easy to send patches or include other files directly in an email body.
  • edit a message directly from the email client and then save it back to the local mbox it came from
  • pipe the current message to an external program

That’s what I use for kernel development.

Oh, I forgot:

  • handle gpg encrypted email. Some mailing lists I am on send everything encrypted with a mailing list key which is needed to both decrypt the message and to encrypt messages sent to the list. SMIME could be used if GPG can’t work, but both of them are probably equally horrid to code support for, so I recommend GPG as that’s probably used more often.

Bonus things that I have grown to rely on when using mutt are:

  • handle displaying html email by piping to an external tool like w3m
  • send a message from the command line with a specific subject and a specific attachment if needed.
  • specify the configuration file to use as a command line option. It is usually easier to have one configuration file for a “work” account, and another one for a “personal” one, with different email servers and settings provided for both.
  • configuration files that can include other configuration files. Mutt allows me to keep all of my “core” keybindings in one config file and just the specific email server options in separate config files, making configuration management easier (sketched below).
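
That layout boils down to a couple of source lines per account (a sketch with made-up file names and address):

# ~/.mutt/work -- a per-account config, used via: mutt -F ~/.mutt/work
source ~/.mutt/core-bindings      # shared "core" macros and key bindings
set folder = ~/mail               # account-specific server/folder settings
set spoolfile = +INBOX
set from = "greg@example.com"     # made-up address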

If you have made it this far, and you aren’t writing an email client, that’s amazing; it must be a slow news day, which is a good thing. I hope this writeup helps show others how mutt can be used to handle developer workflows easily, and what is required of a good email client in order to be able to do all of this.

Hopefully other email clients can get to the state where they too can do all of this. Competition is good, and maybe aerc can get there someday.

August 14, 2019 12:37 PM

August 11, 2019

Sri Ramkrishna

West Coast Hackfest – Summary

Sorry this was supposed to have gone out some weeks ago and I lazed it up. Blame it on my general resistance to blogging. :-)

This year, I helped organize West Coast Hackfest with my stalwart partner and friend Teresa Hill in Portland – with assistance from Kristi Progi. Big thanks to them for helping to make this a success!

Primarily, the engagement hackfest was focused on website content. The website is showing its age and needs both a content update and a facelift. Given our general focus on engagement, we want to re-envision the website to drive engagement, as a medium for volunteer capture, identity, and fundraising.

The three days of the engagement hackfest were spent going through each of the various pages, pointing out issues in the content and what should be fixed. Fixing them is a little bit problematic, as the content is generally not editable in WordPress but embedded in the theme, to which few people have access. Another focus will be opening up that content and finding alternatives for creating content without having to touch the theme at all.

Our observations going through them are as follows:

  • Our website doesn’t actually identify what we are as a project and what we work on (e.g. the word “desktop” doesn’t show up anywhere on our website)
  • There is no emotional connection for newcomers who want to know what GNOME is and what our values are
  • We have old photos from 6-7 years ago that need to be updated
  • The messaging that we have developed within the engagement team is not reflected on the website and should be updated accordingly
  • We have items on our technologies page that are no longer maintained, like Telepathy
  • We have new items that need to be added to our technology page
  • We have outdated links to social media (e.g. G+, which no longer exists)

Our tour of the website has shown how out of date it is, and it is clear that it is not part of the engagement process. One of the things we will talk about at GUADEC is managing content and visuals on the website as part of the engagement team's activity. We have an opportunity to really find new ways to connect with our users, volunteers, and donors, and to reach out to potential new folks through the philanthropy and activism in Free Software that we do.

I would like to thank the GNOME Foundation for providing the resources and infrastructure to have us all here.

The plan for West Coast Hackfest is to continue to expand its participation in the U.S. As a U.S.-based non-profit, we have a responsibility to expand our mission in the United States as part of our Foundation activities. While we have been quite modest this year, we hope to expand even larger next year, as another vehicle like GUADEC: a meeting place for users, maintainers, designers, documenters, and everyone else.

If you are interested in hosting West Coast Hackfest (we’ll call it something else – suggestions?), please get in touch with Kristi Progi and myself. We would love to hear from you!

by sri at August 11, 2019 11:06 PM

June 15, 2019

Greg KH

Linux stable tree mirror at github

As everyone seems to like to put kernel trees up on github for random projects (based on the crazy notifications I get all the time), I figured it was time to put up a semi-official mirror of all of the stable kernel releases on github.com.

It can be found at: https://github.com/gregkh/linux and I will try to keep it up to date with the real source of all kernel stable releases at https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/

It differs from Linus’s tree at: https://github.com/torvalds/linux in that it contains all of the different stable tree branches and stable releases and tags, which many devices end up building on top of.
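
Using the mirror is the usual git exercise; for example (the branch name here is just an example):

git clone https://github.com/gregkh/linux.git
cd linux
git checkout linux-4.19.y        # check out a stable branch

or, to add it to an existing tree:

git remote add stable https://github.com/gregkh/linux.git
git fetch stable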

So, mirror away!

Also note, this is a read-only mirror, any pull requests created on it will be gleefully ignored, just like happens on Linus’s github mirror.

If people think this is needed on any other git hosting site, just let me know and I will be glad to push to other places as well.

This notification was also cross-posted on the new http://people.kernel.org/ site, go follow that for more kernel developer stuff.

June 15, 2019 08:10 PM

September 03, 2009

Valerie Aurora

Carbon METRIC BUTTLOAD print

I just read Charlie Stross's rant on reducing his household's carbon footprint. Summary: He and his wife can live a life of monastic discomfort, wearing moldy scratchy 10-year-old bamboo fiber jumpsuits and shivering in their flat - or, they can cut out one transatlantic flight per year and achieve the equivalent carbon footprint reduction.

I did a similar analysis back around 2007 or so and had the same result: I've got a relatively trim carbon footprint compared to your average first-worlder, except for the air travel that turns it into a bloated planet-eating monster too extreme to fall under the delicate term "footprint." Like Charlie, I am too practical, too technophilic, and too hopeful to accept that the only hope of saving the planet is to regress to third world living standards (fucking eco-ascetics!). I decided that I would only make changes that made my life better, not worse - e.g., living in a walkable urban center (downtown Portland, now SF). But the air travel was a stumper. I liked traveling, and flying around the world for conferences is a vital component of saving the world through open source. Isn't it? Isn't it?

Two things happened that made me re-evaluate my air travel philosophy. One, I started a file systems consulting business and didn't have a lot of spare cash to spend on fripperies. Two, I hurt my back and sitting became massively uncomfortable (still recovering from that one). So I cut down on the flying around the world to Linux conferences involuntarily.

You know what I discovered? I LOVE not flying around the world for Linux conferences. I love taking only a few flights a year. I love flying mostly in the same time zone (yay, West coast). I love having the energy to travel for fun because I'm not all dragged out by the conference circuit. I love hanging out with my friends who live in the same city instead of missing out on all the parties because I'm in fucking Venezuela instead.

Save the planet. Burn your frequent flyer card.

September 03, 2009 07:04 AM

February 18, 2009

Stephen Hemminger

Parallelizing netfilter

Linux networking receive performance has been mostly single-threaded until the advent of MSI-X and multiqueue receive hardware. Now, with many cards, it is possible to be processing packets on multiple CPUs and cores at once. All this is great, and improves performance for the simple case.

But most users don't just use simple networking. They use useful features like netfilter to do firewalling, NAT, connection tracking and all other forms of wierd and wonderful things. The netfilter code has been tuned over the years, but there are still several hot locks in the receive path. Most of these are reader-writer locks which are actually the worst kind, much worse than a simple spin lock. The problem with locks on modern CPU's is that even for the uncontested case, a lock operation means a full-stop cache miss.

With the help of Eric Dumazet, Rick Jones, Martin Josefsson and others, it looks like there is a solution to most of these. I am excited to see how it all pans out, but it could mean a big performance increase for any kind of netfilter packet-intensive processing. Stay tuned.

by Linux Network Plumber (noreply@blogger.com) at February 18, 2009 05:51 AM

September 25, 2010

Andy Grover

Plumbers Down Under

Since the original Linux Plumbers Conference (http://www.linuxplumbersconf.org/) drew much inspiration from LCA's (http://lca2011.linux.org.au/) continuing success, it's cool to see some of what Plumbers has done be seen as worthy of emulating at next year's LCA (http://airlied.livejournal.com/73491.html)!

LCA seems like a great opportunity to specifically try to make progress on cross-project issues. It's quite well-attended, so it's likely the people you need in the room to make a decision will be in the room.

by andy.grover at September 25, 2010 01:50 PM

September 10, 2010

Andy Grover

Increasing office presence for remote workers

I work from home. My basement, actually. I recently read an article in the Times about increasing the office presence of remote employees with robots (http://www.nytimes.com/2010/09/05/science/05robots.html?_r=1&pagewanted=1). Pretty interesting. How much does one of those robo-Beltzners cost? $5k? This is a neat idea but it's still not released so who knows.

I've been thinking about other options for establishing a stronger office presence for myself. Recently I bought a webcam. If I used this to broadcast me sitting at my desk on Ustream or Livestream, that would certainly make it so my coworkers (and the rest of the world) could see what I was up to, every second of the workday. This is actually a lot more exposure than an office worker, even in a cubicle, would expect. If I'm in an office cube, I might have people stop by, but I'll know they're there, and they won't always be there. There is still generally solitude and privacy to concentrate on the code and be productive. I'm currently trying something that I think is closer to the balance of a real office:
  • Take snapshots from the webcam every 15 minutes
  • Only during normal working hours
  • Give a 3 second audible warning before capturing
  • Upload to an intranet webserver
I haven't found this to be too much of an imposition -- in fact, the quarter-hourly beeps are somewhat like a clock chime.

In the beginning, it's hard to resist mugging for the camera, but that passes (photo: "whassup???"). Think about how this is better than irc or IM, both of which do have activity/presence indicators, but which either aren't used, or are poorly implemented and often wrong. How much more likely are you, as a colleague of mine, to IM, email, video chat, or call me if you can see I'm at my desk and working? No more "around?" messages needed. You could even see if I'm looking cheerful, or perhaps otherwise indisposed, heh heh (photo: "hello kitty").

On a technical note, although there were many Debian packages that kind-of did what I wanted, it turned out to be surprisingly easy to roll my own in about 20 lines of Python (http://github.com/agrover/pysnapper/blob/master/webcam.py) (photo: "working hard.").

Anyways, just something I've been playing around with, while I wait for my robo-avatar to be set up down at HQ...

by andy.grover at September 10, 2010 05:20 PM

November 08, 2009

Valerie Aurora

Migrated to WordPress

My LiveJournal blog name - valhenson - was the last major holdover from my old name, Val Henson. I got a new Social Security card, passport, and driver's license with my new name several months ago, but migrating my blog? That's hard! Or something. I finally got around to moving to a brand-spanking-new blog at WordPress:

Valerie Aurora's blog

Update your RSS reader with the above if you still want to read my blog - I won't be republishing posts from my new blog on this LiveJournal blog.

If you're aware of any other current instances of "Val Henson" or "Valerie Henson," let me know! I obviously can't change my name on historical documents, like research papers or interviews, but if it's vaguely real-time-ish, I'd like to update it.

One web page I'm going to keep as Val Henson for historical reasons is my Val Henson is a Man joke. Several of the pages on my web site were created after the fact as vehicles for amusing pictures or graphics I had lying around. In this case, my friend Dana Sibera created a pretty damn cool picture of me with a full beard and I had to do something with it.



It's doubly wild now that I have such short hair.

November 08, 2009 11:36 PM