Planet Linux Plumbers Conf

June 03, 2009

Darrick Wong

Picspam!

/me rounded up a bunch of (old) panoramas and put them into the high-definition panorama viewer. Be sure to check out the (huge spike in memory cache when you load the) panorama previewer (click the "See All" button).

June 03, 2009 02:27 AM

August 27, 2008

Stephen Hemminger

Exploring transactional filesystems

In order to implement router-style semantics, Vyatta allows setting many different configuration variables and then applying them all at once with a commit command. Currently, this is implemented by a combination of shell magic and unionfs. The problem is that keeping unionfs up to date and fixing the resulting crashes is a major pain.

There must be better alternatives; current options include:
  • Replace unionfs with aufs, which has fewer users yelling at it and more developers.
  • Use a filesystem like btrfs which has snapshots. This changes the model and makes APIs like "what changed?" hard to implement.
  • Move to a pure userspace model using git (see the sketch after this list). The problem here is that git as currently written is meant for users, not transactions.
  • Use a combination of copy, bind mount, and rsync.
  • Use a database for configuration. This is easier for general queries but is the most work. Conversion from the existing format would be a pain.
Looks like a fun/hard problem. Don't expect any resolution soon.
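
To make the git option concrete, here is a minimal sketch of a transactional configuration store in Python. Everything in it is hypothetical: the repository path, the helper names, and the idea of mapping config nodes to files are my assumptions, not anything Vyatta ships. Edits are staged in a git work tree; commit applies them all at once, and discard throws them away.

    import os
    import subprocess

    CONFIG_REPO = "/tmp/config-repo"  # hypothetical config store location

    def git(*args):
        # Pin a commit identity so this runs on a box with no git config.
        subprocess.run(("git", "-c", "user.name=config",
                        "-c", "user.email=config@localhost") + args,
                       cwd=CONFIG_REPO, check=True)

    def init_store():
        os.makedirs(CONFIG_REPO, exist_ok=True)
        git("init", "-q")
        git("commit", "-q", "--allow-empty", "-m", "empty config")

    def set_value(path, value):
        # Stage a change without applying it: write the file, git-add it.
        full = os.path.join(CONFIG_REPO, path)
        os.makedirs(os.path.dirname(full), exist_ok=True)
        with open(full, "w") as f:
            f.write(value + "\n")
        git("add", path)

    def what_changed():
        # "What changed?" is a diff of the staged-but-uncommitted state.
        out = subprocess.run(["git", "diff", "--cached", "--name-status"],
                             cwd=CONFIG_REPO, check=True,
                             capture_output=True, text=True)
        return out.stdout

    def commit(msg="config commit"):
        # Apply every staged change in one atomic step.
        git("commit", "-q", "-m", msg)

    def discard():
        # Abandon everything staged since the last commit.
        git("reset", "-q", "--hard", "HEAD")
        git("clean", "-qfd")

Rollback and the "what changed?" query fall out of this model for free; what the sketch glosses over is exactly the hard part noted above, namely hardening git for unattended transactional use and applying the committed tree to the running router.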

by Linux Network Plumber (noreply@blogger.com) at August 27, 2008 10:20 PM

December 09, 2018

Paul E. McKenney

Parallel Programming: December 2018 Update

This weekend features a new release of Is Parallel Programming Hard, And, If So, What Can You Do About It?.

This release features Makefile-automated running of litmus tests (both with herd and litmus tools), catch-ups with recent Linux-kernel changes, a great many consistent-style changes (including a new style-guide appendix), improved code cross-referencing, and a great many proofreading changes, all courtesy of Akira Yokosawa. SeongJae Park, Imre Palik, Junchang Wang, and Nicholas Krause also contributed much-appreciated improvements and fixes. This release also features numerous epigraphs, modernization of sample code, many random updates, and larger updates to the memory-ordering chapter, with much help from my LKMM partners in crime, whose names are now enshrined in the LKMM section of the Linux-kernel MAINTAINERS file.

As always, git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/perfbook.git will be updated in real time.

Oh, and the first edition is now available on Amazon in English as well as Chinese. I have no idea how this came about, but there it is!

by paulmck at December 09, 2018 07:42 PM

November 04, 2018

Paul E. McKenney

Book review: "Skin in the Game: Hidden Asymmetries in Daily Life"

“Antifragile” was the last volume in Nassim Taleb's Incerto series, but it has lost that distinction with the publication of “Skin in the Game: Hidden Asymmetries in Daily Life”. This book covers a great many topics, but I will focus on only a few that relate most closely to my area of expertise.

Chapter 2 is titled “The Most Intolerant Wins: The Dominance of a Stubborn Minority”. Examples include kosher and halal food, the English language (I plead guilty!!!), and many others besides. In all cases, if the majority is not overly inconvenienced by the strongly expressed needs or desires of the minority, the minority's preferences will prevail. On the one hand, I have no problem eating either kosher or halal food, so would be part of the compliant majority in that case. On the other hand, although I know bits and pieces of several languages, the only one I am fluent in is English, and I have attended gatherings where the language was English solely for my benefit. But there are limits. For example, if I were to attend a gathering in certain parts of (say) rural India or China, English might not be within the realm of possibility.

But what does this have to do with parallel programming???

This same stubborn-minority dominance appears in software, including RCU. Very few machines have more than a few tens of CPUs, but RCU is designed to accommodate thousands. Very few systems run workloads featuring aggressive real-time requirements, but RCU is designed to support low latencies (and even more so the variant of RCU present in the -rt patchset). Very few systems allow physical removal of CPUs while the system is running, but RCU is designed to support that as well. Of course, as with human stubborn minorities, there are limits. RCU handles systems with a few thousand CPUs, but probably would not do all that well on a system with a few million CPUs. RCU supports deep sub-millisecond real-time latencies, but not sub-microsecond latencies. RCU supports controlled removal and insertion of CPUs, but not surprise removal or insertion.

Chapter 6 is titled “Intellectual Yet Idiot” (with the entertaining subtext “Teach a professor how to deadlift”), and, as might be expected from the title, takes a fair number of respected intellectuals to task, for but two examples, Cass Sunstein and Richard Thaler. I did find the style of this chapter a bit off-putting, but I happened to read Michael Lewis's “The Undoing Project” at about the same time. This informative and entertaining book covers the work of Daniel Kahneman and Amos Tversky (whose work helped to inform that of Sunstein and Thaler), but I found the loss-aversion experiments to be unsettling. After all, what does losing (say) $100 really mean? That I will be sad for a bit? That I won't be able to buy that new book I was looking forward to reading? That I don't get to eat dinner tonight? That I go hungry for a week? That I starve to death? I just might give a very different answer in these different scenarios, mightn't I?

This topic is also covered by Jared Diamond in his most excellent book entitled “The World Until Yesterday”. In the “Scatter your land” section, Diamond discusses how traditional farmers plant multiple small and widely separated plots of land. This practice puzzled anthropologists for some time, as it does the opposite of optimize yields and minimize effort. Someone eventually figured out that because these traditional farmers had no way to preserve food and limited opportunities to trade it, there was no value in producing more food than they could consume. But there was value in avoiding a year in which there was no food, and farming different crops in widely separated locations greatly decreased the odds that all their crops in all their plots would fail, thus in turn minimizing the probability of starvation. In short, these farmers were not optimizing for maximum average production, but rather for maximum probability of survival.

And this tradeoff is central to most of Taleb's work to date, including “Skin in the Game”.

But what does this have to do with parallel programming???

Quite a bit, as it turns out. In theory, RCU should just run its state machine and be happy. In practice, there are all kinds of things that can stall its state machine, ranging from indefinitely preempted readers to long-running kernel threads refusing to give up the CPU to who knows what all else. RCU therefore contains numerous forward-progress checks that reduce performance slightly but which also allow RCU to continue working when the going gets rough. This sort of thing is baked even more deeply into the physical engineering disciplines in the form of the fabled engineering factor of safety. For example, a bridge might be designed to handle three times the heaviest conceivable load, thus perhaps surviving a black-swan event such as a larger-than-expected earthquake or tidal wave.

Returning to Skin in the Game, Taleb makes much of the increased quality of decisions when the decider is directly affected by them, and rightly so. However, I became uneasy about cases where the decision and effect are widely separated in time. Taleb does touch obliquely on this topic in a section entitled “How to Put Skin in the Game of Suicide Bombers”, but does not address this topic in more prosaic settings. One could take a survival-based approach, arguing that tomorrow matters not unless you survive today, but in the absence of a very big black swan, a large fraction of the people alive today will still be alive ten years from now.

But what does this have to do with parallel programming???

There is a rather interesting connection, especially when you consider that Linux-kernel RCU's useful lifespan probably exceeds my own. This is not a new thought, and is in fact why I have put so much energy into speaking and writing about RCU. I also try my best to make RCU able to stand up to whatever comes its way, with varying degrees of success over the years.

However, beyond a certain point, this practice is labeled “overengineering”, which is looked down upon within the Linux kernel community. And with good reason: Many of the troubles one might foresee will never happen, and so the extra complexity added to deal with those troubles will provide nothing but headaches for no benefit. In short, my best strategy is to help make sure that there are bright, capable, and motivated people to look after RCU after I am gone. I therefore intend to continue writing and speaking about RCU. :–)

by paulmck at November 04, 2018 03:54 AM

August 24, 2018

Greg KH

What stable kernel should I use?

I get a lot of questions from people asking what stable kernel they should be using for their product/device/laptop/server/etc. Especially given the now-extended length of time that some kernels are being supported by me and others, this isn’t always a very obvious thing to determine. So this post is an attempt to write down my opinions on the matter. Of course, you are free to use whatever kernel version you want, but here’s what I recommend.

As always, the opinions written here are my own; I speak for no one but myself.

What kernel to pick

Here’s my short list of what kernel you should use, ranked from best to worst options. I’ll go into the details of all of these below, but if you just want the summary, here it is:

Hierarchy of what kernel to use, from best solution to worst:

  • Supported kernel from your favorite Linux distribution
  • Latest stable release
  • Latest LTS release
  • Older LTS release that is still being maintained

What kernel to never use:

  • Unmaintained kernel release

To give numbers to the above: as of today, August 24, 2018, the front page of kernel.org looks like this:

[screenshot: kernel.org front page release list]

So, based on the above list that would mean that:

  • 4.18.5 is the latest stable release
  • 4.14.67 is the latest LTS release
  • 4.9.124, 4.4.152, and 3.16.57 are the older LTS releases that are still being maintained
  • 4.17.19 and 3.18.119 are “End of Life” kernels that have had a release in the past 60 days, and as such stick around on the kernel.org site for those who still might want to use them.

Quite easy, right?
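
For what it’s worth, kernel.org also publishes the same release information in machine-readable form, which makes the “what should I use?” check scriptable. A small Python sketch (the URL is real, but the field names are assumed from the feed’s current layout and could change):

    import json
    from urllib.request import urlopen

    # kernel.org's machine-readable release feed.
    URL = "https://www.kernel.org/releases.json"

    with urlopen(URL) as resp:
        data = json.load(resp)

    # Print each release the way the front page shows them:
    # version, its moniker (mainline/stable/longterm), and EOL status.
    for rel in data["releases"]:
        eol = " (End of Life)" if rel.get("iseol") else ""
        print(f'{rel["version"]:12} {rel["moniker"]}{eol}')

On the date above, this should have printed 4.18.5 as the stable release, the various longterm kernels alongside it, and flagged the End-of-Life ones.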

Ok, now for some justification for all of this:

Distribution kernels

The best solution for almost all Linux users is to just use the kernel from your favorite Linux distribution. Personally, I prefer the community-based Linux distributions that constantly roll along with the latest updated kernel, supported by that developer community. Distributions in this category are Fedora, openSUSE, Arch, Gentoo, CoreOS, and others.

All of these distributions use the latest stable upstream kernel release and make sure that any needed bugfixes are applied on a regular basis. That is one of the most solid and best kernels you can use when it comes to having the latest fixes (remember, all fixes are security fixes) in it.

There are some community distributions that take a bit longer to move to a new kernel release, but eventually get there and support the kernel they currently have quite well. Those are also great to use, and examples of these are Debian and Ubuntu.

Just because I did not list your favorite distro here does not mean its kernel is not good. Look on the web site for the distro and make sure that the kernel package is constantly updated with the latest security patches, and all should be well.

Lots of people seem to like the old, “traditional” model of a distribution and use RHEL, SLES, CentOS or the “LTS” Ubuntu release. Those distros pick a specific kernel version and then camp out on it for years, if not decades. They do loads of work backporting the latest bugfixes and sometimes new features to these kernels, all in a quixotic quest to keep the version number from ever changing, despite having many thousands of changes on top of that older kernel version. This work is a truly thankless job, and the developers assigned to these tasks do some wonderful work in order to achieve these goals. If you like never seeing your kernel version number change, then use these distributions. They usually cost some money to use, but the support you get from these companies is worth it when something goes wrong.

So again, the best kernel you can use is one that someone else supports, and you can turn to for help. Use that support; usually you are already paying for it (for the enterprise distributions), and those companies know what they are doing.

But, if you do not want to trust someone else to manage your kernel for you, or you have hardware that a distribution does not support, then you want to run the Latest stable release:

Latest stable release

This kernel is the latest one from the Linux kernel developer community that they declare as “stable”. About every three months, the community releases a new stable kernel that contains all of the newest hardware support, the latest performance improvements, as well as the latest bugfixes for all parts of the kernel. Over the next 3 months, bugfixes destined for the next kernel release are backported into this stable release, so that any users of this kernel are sure to get them as soon as possible.

This is usually the kernel that most community distributions use as well, so you can be sure it is tested and has a large audience of users. Also, the kernel community (all 4000+ developers) are willing to help support users of this release, as it is the latest one that they made.

After 3 months, a new kernel is released and you should move to it to ensure that you stay up to date, as support for this kernel is usually dropped a few weeks after the newer release happens.

If you have new hardware that was purchased after the last LTS release came out, you are almost guaranteed to have to run this kernel in order to have it supported. So for desktops or new servers, this is usually the recommended kernel to be running.

Latest LTS release

If your hardware relies on a vendor's out-of-tree patch in order to work properly (like almost all embedded devices these days), then the next best kernel to be using is the latest LTS release. That release gets all of the latest kernel fixes that go into the stable releases where applicable, and lots of users test and use it.

Note, no new features and almost no new hardware support is ever added to these kernels, so if you need to use a new device, it is better to use the latest stable release, not this release.

Also, this release is common for users who do not like to worry about “major” upgrades happening every 3 months. So they stick to this release and upgrade every year instead, which is a fine practice to follow.

The downside of using this release is that you do not get the performance improvements that happen in newer kernels, except when you update to the next LTS kernel, potentially a year in the future. That could be significant for some workloads, so be very aware of this.

Also, if you have problems with this kernel release, the first thing any developer you report the issue to is going to ask is, “does the latest stable release have this problem?” So you will need to be aware that support might not be as easy to get as with the latest stable releases.

Now if you are stuck with a large patchset and can not update to a new LTS kernel once a year, perhaps you want the older LTS releases:

Older LTS release

These releases have traditionally been supported by the community for 2 years, sometimes longer when a major distribution relies on them (like Debian or SLES). However, in the past year, thanks to a lot of support and investment in testing and infrastructure from Google, Linaro, Linaro member companies, kernelci.org, and others, these kernels are starting to be supported for much longer.

Here are the latest LTS releases and how long they will be supported, as shown at kernel.org/category/releases.html on August 24, 2018:

[table: LTS release support schedule from kernel.org]

The reason that Google and other companies want to have these kernels live longer is the crazy (some will say broken) development model of almost all SoC chips these days. Those devices start their development lifecycle a few years before the chip is released; however, that code is never merged upstream, resulting in a brand new chip being released based on a 2 year old kernel. These SoC trees usually have over 2 million lines added to them, making them something that I have started calling “Linux-like” kernels.

If the LTS releases stop happening after 2 years, then support from the community instantly stops, and no one ends up doing bugfixes for them. This results in millions of very insecure devices floating around in the world, not something that is good for any ecosystem.

Because of this dependency, these companies now require new devices to constantly update to the latest LTS releases as they happen for their specific release version (i.e. every 4.9.y release that happens). An example of this is the Android kernel requirements: new devices shipping with the “O” and now “P” releases must run a specified minimum kernel version, and Android security releases might start to require those “.y” releases to happen more frequently on devices.

I will note that some manufacturers are already doing this today. Sony is one great example, updating to the latest 4.4.y release on many of their new phones for their quarterly security release. Another good example is the small company Essential, which has been tracking the 4.4.y releases faster than anyone I know of.

There is one huge caveat when using a kernel like this. The number of security fixes that get backported is not as great as with the latest LTS release, because the traditional usage model for devices running these older LTS kernels is much more restricted. These kernels are not to be used in any type of “general computing” model where you have untrusted users or virtual machines, as the ability to backport some of the recent Spectre-type fixes to older releases is greatly reduced, if present at all in some branches.

So again, only use older LTS releases in a device that you fully control, or lock down with a very strong security model (like Android enforces using SELinux and application isolation). Never use these releases on a server with untrusted users, programs, or virtual machines.

Also, support from the community for these older LTS releases is greatly reduced even from the normal LTS releases, if available at all. If you use these kernels, you really are on your own, and need to be able to support the kernel yourself, or rely on your SoC vendor to provide that support for you (note that almost none of them do provide that support, so beware…)

Unmaintained kernel release

Surprisingly, many companies do just grab a random kernel release, slap it into their product and proceed to ship it in hundreds of thousands of units without a second thought. One crazy example of this would be the Lego Mindstorm systems that shipped a random -rc release of a kernel in their device for some unknown reason. A -rc release is a development release that not even the Linux kernel developers feel is ready for everyone to use just yet, let alone millions of users.

You are of course free to do this if you want, but note that you really are on your own here. The community cannot support you, as no one is watching all kernel versions for specific issues, so you will have to rely on in-house support for everything that could go wrong. For some companies and systems that could be just fine, but be aware of the “hidden” cost this might cause if you do not plan for it up front.

Summary

So, here’s a short list of different types of devices, and what I would recommend for their kernels:

  • Laptop / Desktop: Latest stable release
  • Server: Latest stable release or latest LTS release
  • Embedded device: Latest LTS release or older LTS release if the security model used is very strong and tight.

And as for me, what do I run on my machines? My laptops run the latest development kernel (i.e. Linus’s development tree) plus whatever kernel changes I am currently working on and my servers run the latest stable release. So despite being in charge of the LTS releases, I don’t run them myself, except in testing systems. I rely on the development and latest stable releases to ensure that my machines are running the fastest and most secure releases that we know how to create at this point in time.

August 24, 2018 04:11 PM

July 15, 2018

Sri Ramkrishna

GUADEC 2018 Almeria – reflections

It’s been a couple of days since the BoF days ended.  I’ve been spending most of the time since GUADEC traveling around Spain and enjoying myself immensely.  I leave to go back to the U.S. the day after tomorrow, so now seems like a perfect time to write up something about my experience this year.

Almeria was a grand time; as usual, being able to connect with friends and acquaintances is a large part of what makes GUADEC special.  I found all the evening events to be spectacular and full of surprises.  The beach party was awesome, and the flamenco night was just spectacular.  I was really moved by the music and the dancing.  There were clearly a lot of different influences there.

The little skit with Nuritzi and others was really a special surprise, and they did seem to do a fairly decent job with barely any direction.  :)

Overall, it was a fabulous GUADEC.  I think several people made this observation, but it seemed that our community had been in decline over the years thanks to the market’s focus on webapps.  Nobody, seemingly, is interested in desktops or in helping fund them.  But this year it seems we have a new generation of enthusiastic young people who are very much interested in moving the platform forward.  We’ve had more downstreams represented this year.  Next year, let’s try to increase that outreach by adding XFCE and MATE to the mix.  Only through diversity are we going to be able to improve our platform, and the ongoing discussions with GNOME’s downstreams have been beneficial for everyone.

The other observation is that the engagement team, thanks to the move to GitLab, is becoming more of a core function of GNOME.  There is definitely a feeling that how we present ourselves to the world is an important part of the project’s functions, and of course we want to be able to help as much as we can to improve GNOME’s image to the outside world.  We shouldn’t be afraid to communicate even if there are forces out there that seek to bring us down.  We shouldn’t be afraid to listen, either.  So we’ll continue having an internal discussion about listening as well.

I was the winner of this year’s Pants award!  I’m so, so grateful to all of you for the recognition and for knowing that the work I do is valuable.  It truly is a labor of love for me to help build relationships between GNOME and the community.  I would have given a few words at the time, but I probably would have gotten emotional, so I demurred. :-)

BoF days were great, and we got some work done.  We’ve kicked off some more projects that will hopefully get some attention during the year.  The engagement team is still double or triple booked on projects.  If you are interested in joining us, we would love to have you.  We have all kinds of things that are generally non-technical but also fun!  So reach out through the comments or some other way.

I would like to personally thank the GUADEC organizing committee for all their hard work.  Things got started pretty late, but thanks to great dedication the team was able to get things accomplished.  Bravo!  Also, thank you to the KDE Hispano team, who helped run the registration booth.  Never a more friendly bunch of people!

Lastly, I want to make a final plea: the CfP for the Libre Application Summit (hosted by GNOME) – LAS – will be closing in about 10 days.  This isn’t a desktop conference, but a conference focused on applications and the ecosystem.  Please spread the word as much as you can so that we can continue to expand this conference.  It will be our vehicle to talk about how we can start building a market for the Linux platform and the accomplishments of those involved in desktops, toolkits, and design.  http://las.gnome.org/  I’m working on adding more stakeholders like KDE to the conference so that the conference represents everyone.

My trip was funded by the GNOME Foundation, and I’m grateful to them for sponsoring me.

by sri at July 15, 2018 09:43 PM

July 04, 2018

Sri Ramkrishna

See you at GUADEC!

I’m currently writing this at the Minneapolis airport, having ramen and sushi before boarding my flight to CDG and ultimately to Malaga and Almeria. GUADEC is always the most special time: being able to meet absent friends, and of course the scheming, the plotting, and the rabble-rousing, and that’s just the things I’m doing! :-)

I’m presenting a community talk this year, the slides for which are being written on the flight. This will be a busy GUADEC, as there are a lot of things going on: projects that I’m looking to complete, a conference that I need to promote, and a lot of conversations about all things engagement, website, and various other things. There might even be a special blog post later! ;)

See ya there!  I would like to thank the GNOME Foundation, without which I would not have been able to attend.

by sri at July 04, 2018 07:56 PM

March 11, 2018

Greg KH

My affidavit in the Geniatech vs. McHardy case

As many people know, last week there was a court hearing in the Geniatech vs. McHardy case. This was a case brought in the German court of OLG Cologne, claiming a license violation of the Linux kernel in Geniatech devices.

Harald Welte has written up a wonderful summary of the hearing; I strongly recommend that everyone go read that first.

In Harald’s summary, he refers to an affidavit that I provided to the court. Because the case was withdrawn by McHardy, my affidavit was not entered into the public record. I had always assumed that my affidavit would be made public, and since I have had a number of people ask me about what it contained, I figured it was good to just publish it for everyone to be able to see it.

There are some minor edits from what was exactly submitted to the court, such as the side-by-side German translation of the English text and some reformatting around some footnotes in the text, because I don’t know how to do that directly here, and they really were not all that relevant for anyone who reads this blog. Exhibit A is also not reproduced, as it’s just a huge list of all of the kernel releases for which I felt there was no evidence of any contribution by Patrick McHardy.

AFFIDAVIT

I, the undersigned, Greg Kroah-Hartman,
declare in lieu of an oath and in the
knowledge that a wrong declaration in
lieu of an oath is punishable, to be
submitted before the Court:

I. With regard to me personally:

1. I have been an active contributor to
   the Linux Kernel since 1999.

2. Since February 1, 2012 I have been a
   Linux Foundation Fellow.  I am currently
   one of five Linux Foundation Fellows
   devoted to full time maintenance and
   advancement of Linux. In particular, I am
   the current Linux stable Kernel maintainer
   and manage the stable Kernel releases. I
   am also the maintainer for a variety of
   different subsystems that include USB,
   staging, driver core, tty, and sysfs,
   among others.

3. I have been a member of the Linux
   Technical Advisory Board since 2005.

4. I have authored two books on Linux Kernel
   development including Linux Kernel in a
   Nutshell (2006) and Linux Device Drivers
   (co-authored Third Edition in 2009.)

5. I have been a contributing editor to Linux
   Journal from 2003 - 2006.

6. I am a co-author of every Linux Kernel
   Development Report. The first report was
   based on my Ottawa Linux Symposium keynote
   in 2006, and the report has been published
   every few years since then. I have been
   one of the co-authors on all of them. This
   report includes a periodic in-depth
   analysis of who is currently contributing
   to Linux. Because of this work, I have an
   in-depth knowledge of the various records
   of contributions that have been maintained
   over the course of the Linux Kernel
   project.

   For many years, Linus Torvalds compiled a
   list of contributors to the Linux kernel
   with each release. There are also usenet
   and email records of contributions made
   prior to 2005. In April of 2005, Linus
   Torvalds created a program now known as
   “Git” which is a version control system
   for tracking changes in computer files and
   coordinating work on those files among
   multiple people. Every Git directory on
   every computer contains an accurate
   repository with complete history and full
   version tracking abilities.  Every Git
   directory captures the identity of
   contributors.  Development of the Linux
   kernel has been tracked and managed using
   Git since April of 2005.

   One of the findings in the report is that
   since the 2.6.11 release in 2005, a total
   of 15,637 developers have contributed to
   the Linux Kernel.

7. I have been an advisor on the Cregit
   project and compared its results to other
   methods that have been used to identify
   contributors and contributions to the
   Linux Kernel, such as a tool known as “git
   blame” that is used by developers to
   identify contributions to a git repository
   such as the repositories used by the Linux
   Kernel project.

8. I have been shown documents related to
   court actions by Patrick McHardy to
   enforce copyright claims regarding the
   Linux Kernel. I have heard many people
   familiar with the court actions discuss
   the cases and the threats of injunction
   McHardy leverages to obtain financial
   settlements. I have not otherwise been
   involved in any of the previous court
   actions.

II. With regard to the facts:

1. The Linux Kernel project started in 1991
   with a release of code authored entirely
   by Linus Torvalds (who is also currently a
   Linux Foundation Fellow).  Since that time
   there have been a variety of ways in which
   contributions and contributors to the
   Linux Kernel have been tracked and
   identified. I am familiar with these
   records.

2. The first record of any contribution
   explicitly attributed to Patrick McHardy
   to the Linux kernel is April 23, 2002.
   McHardy’s last contribution to the Linux
   Kernel was made on November 24, 2015.

3. The Linux Kernel 2.5.12 was released by
   Linus Torvalds on April 30, 2002.

4. After review of the relevant records, I
   conclude that there is no evidence in the
   records that the Kernel community relies
   upon to identify contributions and
   contributors that Patrick McHardy made any
   code contributions to versions of the
   Linux Kernel earlier than 2.4.18 and
   2.5.12. Attached as Exhibit A is a list of
   Kernel releases which have no evidence in
   the relevant records of any contribution
   by Patrick McHardy.

March 11, 2018 01:51 AM

September 03, 2009

Valerie Aurora

Carbon METRIC BUTTLOAD print

I just read Charlie Stross's rant on reducing his household's carbon footprint. Summary: He and his wife can live a life of monastic discomfort, wearing moldy scratchy 10-year-old bamboo fiber jumpsuits and shivering in their flat - or, they can cut out one transatlantic flight per year and achieve the equivalent carbon footprint reduction.

I did a similar analysis back around 2007 or so and had the same result: I've got a relatively trim carbon footprint compared to your average first-worlder, except for the air travel that turns it into a bloated planet-eating monster too extreme to fall under the delicate term "footprint." Like Charlie, I am too practical, too technophilic, and too hopeful to accept that the only hope of saving the planet is to regress to third world living standards (fucking eco-ascetics!). I decided that I would only make changes that made my life better, not worse - e.g., living in a walkable urban center (downtown Portland, now SF). But the air travel was a stumper. I liked traveling, and flying around the world for conferences is a vital component of saving the world through open source. Isn't it? Isn't it?

Two things happened that made me re-evaluate my air travel philosophy. One, I started a file systems consulting business and didn't have a lot of spare cash to spend on fripperies. Two, I hurt my back and sitting became massively uncomfortable (still recovering from that one). So I cut down on the flying around the world to Linux conferences involuntarily.

You know what I discovered? I LOVE not flying around the world for Linux conferences. I love taking only a few flights a year. I love flying mostly in the same time zone (yay, West coast). I love having the energy to travel for fun because I'm not all dragged out by the conference circuit. I love hanging out with my friends who live in the same city instead of missing out on all the parties because I'm in fucking Venezuela instead.

Save the planet. Burn your frequent flyer card.

September 03, 2009 07:04 AM

February 18, 2009

Stephen Hemminger

Parallelizing netfilter

Linux networking receive performance was mostly single-threaded until the advent of MSI-X and multiqueue receive hardware. Now, with many cards, it is possible to be processing packets on multiple CPUs and cores at once. All this is great, and improves performance for the simple case.

But most users don't just use simple networking. They use useful features like netfilter to do firewalling, NAT, connection tracking, and all other forms of weird and wonderful things. The netfilter code has been tuned over the years, but there are still several hot locks in the receive path. Most of these are reader-writer locks, which are actually the worst kind, much worse than a simple spin lock. The problem with locks on modern CPUs is that even in the uncontested case, a lock operation means a full-stop cache miss.

With the help of Eric Dumazet, Rick Jones, Martin Josefsson and others, it looks like there is a solution to most of these. I am excited to see how it all pans out, but it could mean a big performance increase for any kind of netfilter packet-intensive processing. Stay tuned.
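
Kernel rwlocks and their cache-miss behavior obviously can't be reproduced from userspace scripting, but the basic observation, that even a never-contended lock operation carries a real per-acquisition cost, is easy to demonstrate in any language. A toy Python micro-benchmark (purely illustrative; the absolute numbers say nothing about kernel spinlocks or reader-writer locks):

    import threading
    import timeit

    lock = threading.Lock()
    rlock = threading.RLock()   # extra owner/recursion bookkeeping

    def no_lock():
        pass

    def with_lock():
        with lock:              # never contended in this test
            pass

    def with_rlock():
        with rlock:
            pass

    N = 1_000_000
    for name, fn in [("no lock", no_lock),
                     ("Lock", with_lock),
                     ("RLock", with_rlock)]:
        secs = timeit.timeit(fn, number=N)
        print(f"{name:8s} {secs / N * 1e9:7.1f} ns per call")

Even with zero contention the locked versions cost noticeably more than the empty call, and in the kernel that gap is dominated by the cache-line transfer described above, before contention makes anything worse.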

by Linux Network Plumber (noreply@blogger.com) at February 18, 2009 05:51 AM

September 25, 2010

Andy Grover

Plumbers Down Under

Since the original Linux Plumbers Conference drew much inspiration from LCA's continuing success, it's cool to see some of what Plumbers has done be seen as worthy of emulating at next year's LCA (http://airlied.livejournal.com/73491.html)!

LCA seems like a great opportunity to specifically try to make progress on cross-project issues. It's quite well-attended, so it's likely the people you need in the room to make a decision will be in the room.

by andy.grover at September 25, 2010 01:50 PM

September 10, 2010

Andy Grover

Increasing office presence for remote workers

I work from home. My basement, actually. I recently read an article in the Times about increasing the office presence of remote employees with robots (http://www.nytimes.com/2010/09/05/science/05robots.html). Pretty interesting. How much does one of those robo-Beltzners cost? $5k? This is a neat idea, but it's still not released, so who knows.

I've been thinking about other options for establishing a stronger office presence for myself. Recently I bought a webcam. If I used this to broadcast me sitting at my desk on Ustream or Livestream, that would certainly make it so my coworkers (and the rest of the world) could see what I was up to, every second of the workday. This is actually a lot more exposure than an office worker, even in a cubicle, would expect. If I'm in an office cube, I might have people stop by, but I'll know they're there, and they won't always be there. There is still generally solitude and privacy to concentrate on the code and be productive. I'm currently trying something that I think is closer to the balance of a real office:

  • Take snapshots from the webcam every 15 minutes
  • Only during normal working hours
  • Give a 3-second audible warning before capturing
  • Upload to an intranet webserver

I haven't found this to be too much of an imposition; in fact, the quarter-hourly beeps are somewhat like a clock chime.

In the beginning, it's hard to resist mugging for the camera, but that passes:

[photo: whassup???]

Think about how this is better than IRC or IM, both of which do have activity/presence indicators, but which either aren't used or are poorly implemented and often wrong. How much more likely are you, as a colleague of mine, to IM, email, video chat, or call me if you can see I'm at my desk and working? No more "around?" messages needed. You could even see if I'm looking cheerful, or perhaps otherwise indisposed, heh heh:

[photo: hello kitty]

On a technical note, although there were many Debian packages that kind-of did what I wanted, it turned out to be surprisingly easy to roll my own in about 20 lines of Python (http://github.com/agrover/pysnapper/blob/master/webcam.py).

[photo: working hard.]

Anyways, just something I've been playing around with while I wait for my robo-avatar to be set up down at HQ...
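
His actual 20-line script is behind the GitHub link above and is not reproduced here; the following is only a guess at its general shape. A Python sketch (OpenCV for capture, a terminal bell for the warning, and a local file copy standing in for the upload are all my assumptions):

    import datetime
    import shutil
    import time

    import cv2  # OpenCV: pip install opencv-python

    INTERVAL = 15 * 60                        # snapshot every 15 minutes
    WORK_HOURS = range(9, 17)                 # "normal working hours", assumed 9-5
    DEST = "/srv/intranet/webcam/latest.jpg"  # hypothetical intranet webserver path

    def beep(seconds=3):
        # The 3-second audible warning: one terminal bell per second.
        for _ in range(seconds):
            print("\a", end="", flush=True)
            time.sleep(1)

    def snapshot():
        cam = cv2.VideoCapture(0)             # default webcam
        ok, frame = cam.read()
        cam.release()
        if ok:
            cv2.imwrite("/tmp/snapshot.jpg", frame)
            shutil.copy("/tmp/snapshot.jpg", DEST)  # the "upload" step

    while True:
        if datetime.datetime.now().hour in WORK_HOURS:
            beep()
            snapshot()
        time.sleep(INTERVAL)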

by andy.grover at September 10, 2010 05:20 PM

November 08, 2009

Valerie Aurora

Migrated to WordPress

My LiveJournal blog name - valhenson - was the last major holdover from my old name, Val Henson. I got a new Social Security card, passport, and driver's license with my new name several months ago, but migrating my blog? That's hard! Or something. I finally got around to moving to a brand-spanking-new blog at WordPress:

Valerie Aurora's blog

Update your RSS reader with the above if you still want to read my blog - I won't be republishing my posts to my new blog on this LiveJournal blog.

If you're aware of any other current instances of "Val Henson" or "Valerie Henson," let me know! I obviously can't change my name on historical documents, like research papers or interviews, but if it's vaguely real-time-ish, I'd like to update it.

One web page I'm going to keep as Val Henson for historical reasons is my Val Henson is a Man joke. Several of the pages on my web site were created after the fact as vehicles for amusing pictures or graphics I had lying around. In this case, my friend Dana Sibera created a pretty damn cool picture of me with a full beard and I had to do something with it.

[picture: Val Henson with a full beard]
It's doubly wild now that I have such short hair.

November 08, 2009 11:36 PM