Planet Linux Plumbers Conf

August 27, 2008

Stephen Hemminger

Exploring transactional filesystems

In order to implement router-style semantics, Vyatta allows setting many different configuration variables and then applying them all at once with a commit command. Currently, this is implemented by a combination of shell magic and unionfs. The problem is that keeping unionfs up to date and fixing the resulting crashes is a major pain.

There must be better alternatives; current options include:
  • Replace unionfs with aufs, which has fewer users yelling at it and more developers.
  • Use a filesystem like btrfs which has snapshots. This changes the model and makes APIs like "what changed?" hard to implement.
  • Move to a pure userspace model using git. The problem here is that git as currently written is meant for users, not transactions.
  • Use a combination of copy, bind mount, and rsync.
  • Use a database for configuration. This is easier for general queries but is the most work. Conversion from the existing format would be a pain.
Looks like a fun/hard problem. Don't expect any resolution soon.

by Linux Network Plumber (noreply@blogger.com) at August 27, 2008 10:20 PM

August 13, 2014

Paul E. McKenney

A practitioner at a formal-methods conference

I had the privilege of being asked to present on ordering, RCU, and validation at a joint meeting of the REORDER (Third International Workshop on Memory Consistency Models) and EC2 (7th International Workshop on Exploiting Concurrency Efficiently and Correctly) workshops.

Before my talk, Michael Tautschnig gave a presentation (based on this paper) on an interesting prototype tool (called “mole,” with the name chosen because the gestation period of a mole is about 42 days) that helps identify patterns of usage in large code bases. It is early days for this tool, but one could imagine it growing into something quite useful, perhaps answering questions such as “what are the different ways in which the Linux kernel uses reference counting?” He also took care to call out the many disadvantages of testing, which include not being able to test all paths, all possible races, all possible inputs, or all possible much of anything, at least not in finite time.

I started my talk with an overview of split counters, where each CPU (or task or whatever) updates its own counter, and the aggregate counter is read out by summing all the per-CPU counters. There was some concern expressed by members of the audience about the possibility of inconsistent results. For example, if one CPU adds five, another CPU adds seven, and a third CPU adds 11 to an initially zero counter, then two CPUs reading out the counter might see 12 and 18, respectively, which are inconsistent (they differ by six, and no CPU added six). To their credit, the attendees picked right up on a reasonable correctness criterion. The idea is that the aggregate counter's value varies with time, and that any given reader will be guaranteed to return a value between that of the counter when the reader started and that of the counter when the reader ended: consistency is neither needed nor provided in a great number of situations.
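
For those who prefer code to prose, here is a minimal user-space sketch of the split-counter idea in C11 atomics, with the number of CPUs arbitrarily fixed at four. This illustrates the concept only; it is not the Linux kernel's implementation.

#include <stdatomic.h>

#define NR_CPUS 4	/* arbitrary for this sketch */

/* One counter per CPU; an update touches only the local element. */
static _Atomic long counter[NR_CPUS];

/* Called by CPU (or task, or whatever) 'cpu' to add to its own counter. */
static void counter_add(int cpu, long n)
{
	atomic_fetch_add_explicit(&counter[cpu], n, memory_order_relaxed);
}

/*
 * Readers sum all the per-CPU counters.  The result is not a snapshot of
 * any single instant, but it is guaranteed to lie between the aggregate
 * value when the read started and the aggregate value when it finished.
 */
static long counter_read(void)
{
	long sum = 0;

	for (int i = 0; i < NR_CPUS; i++)
		sum += atomic_load_explicit(&counter[i], memory_order_relaxed);
	return sum;
}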

I then gave my usual introduction to RCU, and of course it is always quite a bit of fun introducing RCU to people who have never encountered anything like it. There was quite a bit of skepticism initially, as well as a lot of questions and comments.

I then turned to validation, noting the promise of some formal-validation tooling. I ended by saying that although I agreed with the limitations of testing called out by the previous speaker, the fact remains that a number of people have devised tests that had found RCU bugs (thank you, Stephen, Dave, and Fengguang!), but no one has yet devised a hard-core formal-validation tool that has found any bugs in RCU. I also pointed out that this is definitely not because there are no bugs in RCU! (Yes, I have gotten rid of all day-one bugs in RCU, but only by having also gotten rid of all day-one code in RCU.) When asked if I meant bugs in RCU usage or in RCU itself, I replied “Either would be good.” Several people wrote down where to find RCU in the Linux kernel, so it will be interesting to see what they come up with. (Perhaps all too interesting!)

There were several talks on analyzing weakly ordered systems, but keep in mind that for these guys, even x86 is weakly ordered. After all, it allows prior stores to be reordered with later loads.

Another interesting talk was given by Kapil Vaswani on the topic of wait freedom. Recall that in a wait-free algorithm, every process is guaranteed to make some progress in a finite time, even in the presence of arbitrarily long delays for any given process. In contrast, in a lock-free algorithm, only one process is guaranteed to make some progress in a finite time, again, even in the presence of arbitrarily long delays for any given process. It is worth noting that neither of these guarantees is sufficient for real-time programs, which require a specified amount of progress (not merely some progress) in a bounded amount of time (not merely a finite amount of time). Wait-freedom and lock-freedom are nevertheless important forward-progress guarantees, and there are numerous other similar guarantees including obstruction freedom, deadlock freedom, starvation freedom, and many more besides.
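
To make the distinction concrete, here is a toy C11 sketch, assuming the hardware provides fetch-and-add as a single primitive: the unconditional atomic add is wait-free, because each caller finishes in a bounded number of its own steps, while the compare-and-swap loop is merely lock-free, because an unlucky caller can keep losing the race indefinitely even though some caller always succeeds.

#include <stdatomic.h>

static _Atomic long ctr;

/* Wait-free: every caller finishes in a bounded number of its own steps. */
static void inc_wait_free(void)
{
	atomic_fetch_add(&ctr, 1);
}

/*
 * Lock-free: the system as a whole makes progress (some CAS always
 * succeeds), but any particular caller can lose the race and retry
 * indefinitely.
 */
static void inc_lock_free(void)
{
	long old = atomic_load(&ctr);

	/* On failure, 'old' is reloaded with the current value. */
	while (!atomic_compare_exchange_weak(&ctr, &old, old + 1))
		;
}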

It turns out that most software in production, even in real-time systems, is not wait-free, which has been a source of consternation for many researchers for quite some time. Kapil went on to describe how Alistarh et al. showed that, roughly speaking, given a non-hostile scheduler and crash-free execution, lock-free algorithms have wait-free behavior.

The interesting thing about this is that you can take it quite a bit farther, and those of you who know me well won't be surprised to learn that I did just that in a question to the speaker. If you have a non-hostile scheduler, crash-free execution, FIFO locks, bounded lock-hold times, no lock nesting, a finite number of processes, and so on, you can obtain the benefits of the more aggressive forward-progress guarantees. The general idea is that if you have at most N processes, and if the maximum lock-hold time is T, then you can wait at most (N-1)T time to acquire a given lock. (Those wishing greater rigor should read Bjoern B. Brandenburg's dissertation — Full disclosure: I was on Bjoern's committee.) In happy contrast to the authors of the paper mentioned in the previous paragraph, the speaker and audience seemed quite intrigued by this line of thought.
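
For what it's worth, a ticket lock is the classic way to get the FIFO property that makes this arithmetic work: at most N-1 ticket holders can be ahead of you, each holding the lock for at most T, so you wait at most (N-1)T. Here is a minimal user-space sketch in C11, spinning without backoff; an illustration of the idea rather than production code.

#include <stdatomic.h>

/* Both fields start at zero; tickets are served strictly in FIFO order. */
struct ticket_lock {
	_Atomic unsigned int next;	/* next ticket to hand out */
	_Atomic unsigned int owner;	/* ticket currently being served */
};

static void ticket_lock(struct ticket_lock *l)
{
	/* Atomically take the next ticket. */
	unsigned int me = atomic_fetch_add(&l->next, 1);

	/*
	 * At most N-1 earlier ticket holders are ahead of us, and each
	 * holds the lock for at most T, so we spin for at most (N-1)*T.
	 */
	while (atomic_load(&l->owner) != me)
		;	/* spin */
}

static void ticket_unlock(struct ticket_lock *l)
{
	atomic_fetch_add(&l->owner, 1);
}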

In all, it was an extremely interesting and thought-provoking time. With some luck, perhaps we will see some powerful software tools introduced by this group of researchers.

August 13, 2014 03:40 AM

June 30, 2014

Paul E. McKenney

Confessions of a Recovering Proprietary Programmer, Part XII

We have all suffered from changing requirements. We are almost done with implementation, maybe even most of the way done with testing, and then a change in requirements invalidates all of our hard work. This can of course be extremely irritating.

It turns out that changing requirements are not specific to software, nor are they anything new. Here is an example from my father's area of expertise, which is custom-built large-scale food processing equipment.

My father's company was designing a new-to-them piece of equipment, namely a spinach chopper able to chop 15 tons of spinach per hour. Given that this was a new area, they built a small-scale prototype with which they ran a number of experiments on “small” batches of spinach (that is, less than a ton of spinach). The customer provided feedback on the results of these experiments, which fed into the design of the full-scale production model.

After the resulting spinach chopper had been in production for a short while, there was a routine follow-up inspection. The purpose of the inspection was to check for unexpected wear and other unanticipated problems with the design and fabrication. While the inspector was there, the chopper kicked a large rock into the air. It turned out that spinach from the field can contain rocks up to six inches (15cm) in diameter, and this requirement had not been communicated during development. This situation of course inspired a quick engineering change, installing a cage that captured the rocks.

Of course, it is only fair to point out that this requirement didn't actually change. After all, spinach from the field has always contained rocks up to six inches in diameter. There had been a failure to communicate an existing requirement, not a change in the actual requirement.

However, from the viewpoint of the engineers, the effect is the same. Regardless of whether there was an actual change in the requirements or the exposure of a previously hidden requirement, redesign and rework will very likely be required.

One other point of interest to those who know me... The spinach chopper was re-inspected some five years after it started operation. Its blades had never been sharpened, and despite encountering the occasional rock, still did not need sharpening. So to those of you who occasionally accuse me of over-engineering, all I can say is that I come by it honestly! ;–)

June 30, 2014 10:36 PM

January 15, 2014

Greg KH

kdbus details

Now that linux.conf.au is over, there has been a bunch of information running around about the status of kdbus and the integration of it with systemd. So, here’s a short summary of what’s going on at the moment.

Lennart Poettering gave a talk about kdbus at linux.conf.au. The talk can be viewed here, and the slides are here. Go read the slides and watch the talk, odds are, most of your questions will be answered there already.

For those who don’t want to take the time to watch the talk, lwn.net wrote up a great summary of the talk, and that article is here. For those of you without an lwn.net subscription, what are you waiting for? You’ll have to wait two weeks for it to come out from behind the paid section of the website before you can read it, sorry.

There will be a systemd hack-fest a few days before FOSDEM, where we should hopefully pound out the remaining rough edges on the codebase and get it ready to be merged. Lennart will also be giving his kdbus talk again at FOSDEM if anyone wants to see it in person.

The kdbus code can be found in two places, both on google code, and on github, depending on where you like to browse things. In a few weeks we’ll probably be creating some patches and submitting it for inclusion in the main kernel, but more testing with the latest systemd code needs to be done first.

If you want more information about the kdbus interface, and how it works, please see the kdbus.txt file for details.

Binder vs. kdbus

A lot of people have asked about replacing Android’s binder code with kdbus. I originally thought this could be done, but as time has gone by, I’ve come to the conclusion that this will not happen with the first version of kdbus, and possibly can never happen.

First off, go read that link describing binder that I pointed to above, especially all of the links to different resources from that page. That should give you more than you ever wanted to know about binder.

Short answer

Binder is bound to the CPU; D-Bus (and hence kdbus) is bound to RAM.

Long answer

Binder

Binder is an interface that Android uses to provide synchronous calling (CPU) from one task to a thread of another task. There is no queueing involved in these calls, other than that the caller process is suspended until the answering process returns. RAM is not interesting besides the fact that it is used to share the data between the different callers. The fact that the caller process gives up its CPU slice to the answering process is key for how Android works with the binder library.

This is just like a syscall, and it behaves a lot like a mutex. The communicating processes are directly connected to each other. There is an upper limit of how many different processes can be using binder at once, and I think it’s around 16 for most systems.

D-Bus

D-Bus is asynchronous, it queues (RAM) messages, keeps the messages in order, and the receiver dequeues the messages. The CPU does not matter at all other than it is used to do the asynchronous work of passing the RAM around between the different processes.

This is a lot like network communication protocols. It is a very “disconnected” communication method between processes. The upper limit of message sizes and numbers is usually around 8MB per connection, and a normal message is around 200-800 bytes.
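
To make the “bound to RAM” point concrete, here is a deliberately simplified store-and-forward sketch in plain C (this has nothing to do with the actual D-Bus or kdbus code; the queue depth and message size are arbitrary): the sender copies a message into a queue in memory and returns immediately, and the receiver dequeues it whenever it next runs, with no direct CPU handoff between the two.

#include <string.h>
#include <pthread.h>

#define QUEUE_DEPTH 64		/* arbitrary; must be a power of two here */
#define MSG_MAX     800		/* "normal" message size from the text above */

struct msg {
	size_t len;
	char   data[MSG_MAX];
};

/*
 * A bounded FIFO protected by a mutex: messages sit in RAM, in order,
 * until the receiver gets around to reading them.  Initialize the mutex
 * and condition variable (e.g. with PTHREAD_MUTEX_INITIALIZER and
 * PTHREAD_COND_INITIALIZER) before use.
 */
struct msg_queue {
	struct msg      ring[QUEUE_DEPTH];
	unsigned int    head, tail;
	pthread_mutex_t lock;
	pthread_cond_t  nonempty;
};

/* Sender: copy the message into the queue and return immediately. */
static int mq_send(struct msg_queue *q, const void *buf, size_t len)
{
	pthread_mutex_lock(&q->lock);
	if (len > MSG_MAX || q->head - q->tail == QUEUE_DEPTH) {
		pthread_mutex_unlock(&q->lock);
		return -1;	/* message too big or queue full */
	}
	struct msg *m = &q->ring[q->head++ % QUEUE_DEPTH];
	m->len = len;
	memcpy(m->data, buf, len);
	pthread_cond_signal(&q->nonempty);
	pthread_mutex_unlock(&q->lock);
	return 0;
}

/* Receiver: dequeue the oldest message whenever it happens to run. */
static size_t mq_recv(struct msg_queue *q, void *buf)
{
	pthread_mutex_lock(&q->lock);
	while (q->head == q->tail)
		pthread_cond_wait(&q->nonempty, &q->lock);
	struct msg *m = &q->ring[q->tail++ % QUEUE_DEPTH];
	memcpy(buf, m->data, m->len);
	size_t len = m->len;
	pthread_mutex_unlock(&q->lock);
	return len;
}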

Binder

The model of Binder was created for a microkernel-like device (side note: go read this wonderful article about the history of Danger, written by one of the engineers at that company, for a glimpse into where the Android internals came from, binder included.) The model of binder is very limited and inflexible in its use-cases, but very powerful, extremely low-overhead, and fast. Binder ensures that the same CPU timeslice will go from the calling process into the called process’s thread, and then come back into the caller when finished. There is almost no scheduling involved, and it is much like a syscall into the kernel that does work for the calling process. This interface is very well suited for cheap devices with almost no RAM and very low CPU resources.

So, for systems like Android, binder makes total sense, especially given the history of it and where it was designed to be used.

D-Bus

D-Bus is a create-store-forward, compose reply and then create-store-forward messaging model which is more complex than binder, but because of that, it is extremely flexible, versatile, network transparent, much easier to manage, and very easy to let fully untrusted peers take part in the communication model (hint: never let this happen with binder, or bad things will happen…) D-Bus can scale up to huge amounts of data, and with the implementation of kdbus it is possible to pass gigabytes of buffers to every connection on the bus if you really wanted to. CPU-wise, it is not as efficient as binder, but is a much better general-purpose solution for general-purpose machines and workloads.

CPU vs. RAM

Yes, it’s an oversimplification of two very different and complex IPC methods, but these three words should help you explain the differences between binder and D-Bus and why kdbus isn’t going to be able to easily replace binder anytime soon.

Never say never

Ok, before you start to object to the above statements: yes, we could add functionality to kdbus to have some blocking ioctl calls that implement something like “write question -> block for reply and read reply” in one call for the request side, and then on the server side do “write answer -> block in read”. That would get kdbus a tiny bit closer to the binder model, by queueing stuff in RAM instead of relying on a thread pool.

That might work, but would require a lot of work on the binder library side in Android, and as a very limited number of people have write access to that code (they all can be counted on one hand), and it’s a non-trivial amount of work for a core function of Android that is working very well today, I don’t know if it will ever happen.

But anything is possible, it’s just software you know…

Thanks

Many thanks to Kay Sievers who came up with the CPU vs. RAM description of binder and D-Bus and whose email I pretty much just copied into this post. Also thanks to Kay and Lennart for taking the time and energy to put up with my silly statements about how kdbus could replace binder, and totally proving me wrong, sorry for having you spend so much time on this, but I now know you are right.

Also thanks to Daniel Mack and Kay for doing so much work on the kdbus kernel code, that I don’t think any of my original implementation is even present anymore, which is probably a good thing. Also thanks to Tejun Heo for help with the memfd implementation and cgroups help in kdbus.

January 15, 2014 08:57 PM

October 13, 2008

Darrick Wong

I'm a Huge Dork

Thanks to local jazz station KMHD for introducing me to Raymond Scott. In reading about him, I discovered that he wrote Powerhouse, an A-B-A composition. The A part is (perhaps) more commonly heard as one of NPR's interstitial pieces, and the B part is the cartoon assembly line piece. Give it a listen.

Anyone who's driven through the MacArthur Maze will appreciate its 1930s predecessor (looking west out towards the Bridge):

October 13, 2008 02:56 AM

September 11, 2013

Greg KH

binary blobs to C structures

Sometimes you don’t have access to vim’s wonderful xxd tool, yet you still need to generate some .c code based on a binary file. This happened to me recently when packaging up the EFI signing tools for Gentoo. Adding a build requirement of vim for a single autogenerated file was not an option for some users, so I created a perl version of the xxd -i command line tool.

This works because everyone has perl in their build systems, whether they like it or not. Instead of burying it in the efitools package, here’s a copy of it for others to use if they want/need it.

#!/usr/bin/env perl
#
# xxdi.pl - perl implementation of 'xxd -i' mode
#
# Copyright 2013 Greg Kroah-Hartman <gregkh@linuxfoundation.org>
# Copyright 2013 Linux Foundation
#
# Released under the GPLv2.
#
# Implements the "basic" functionality of 'xxd -i' in perl to keep build
# systems from having to build/install/rely on vim-core, which not all
# distros want to do.  But everyone has perl, so use it instead.

use strict;
use warnings;
use File::Slurp qw(slurp);

my $indata = slurp(@ARGV ? $ARGV[0] : \*STDIN);
my $len_data = length($indata);
my $num_digits_per_line = 12;
my $var_name;
my $outdata;

# Use the variable name of the file we read from, converting '/' and '.'
# to '_', or, if this is stdin, just use "stdin" as the name.
if (@ARGV) {
        $var_name = $ARGV[0];
        $var_name =~ s/\//_/g;
        $var_name =~ s/\./_/g;
} else {
        $var_name = "stdin";
}

$outdata .= "unsigned char $var_name\[] = {";

# trailing ',' is acceptable, so instead of duplicating the logic for
# just the last character, live with the extra ','.
for (my $key= 0; $key < $len_data; $key++) {
        if ($key % $num_digits_per_line == 0) {
                $outdata .= "\n\t";
        }
        $outdata .= sprintf("0x%.2x, ", ord(substr($indata, $key, 1)));
}

$outdata .= "\n};\nunsigned int $var_name\_len = $len_data;\n";

binmode STDOUT;
print {*STDOUT} $outdata;
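
For example, running it over a hypothetical three-byte file named ca.der (the bytes below are made up) would produce something like the following, trailing comma and all:

$ ./xxdi.pl ca.der > ca_der.c
$ cat ca_der.c
unsigned char ca_der[] = {
	0x30, 0x82, 0x03,
};
unsigned int ca_der_len = 3;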

Yes, I know I write perl code like a C programmer, that’s not an insult to me.

September 11, 2013 10:22 PM

September 03, 2009

Valerie Aurora

Carbon METRIC BUTTLOAD print

I just read Charlie Stross's rant on reducing his household's carbon footprint. Summary: He and his wife can live a life of monastic discomfort, wearing moldy scratchy 10-year-old bamboo fiber jumpsuits and shivering in their flat - or, they can cut out one transatlantic flight per year and achieve the equivalent carbon footprint reduction.

I did a similar analysis back around 2007 or so and had the same result: I've got a relatively trim carbon footprint compared to your average first-worlder, except for the air travel that turns it into a bloated planet-eating monster too extreme to fall under the delicate term "footprint." Like Charlie, I am too practical, too technophilic, and too hopeful to accept that the only hope of saving the planet is to regress to third world living standards (fucking eco-ascetics!). I decided that I would only make changes that made my life better, not worse - e.g., living in a walkable urban center (downtown Portland, now SF). But the air travel was a stumper. I liked traveling, and flying around the world for conferences is a vital component of saving the world through open source. Isn't it? Isn't it?

Two things happened that made me re-evaluate my air travel philosophy. One, I started a file systems consulting business and didn't have a lot of spare cash to spend on fripperies. Two, I hurt my back and sitting became massively uncomfortable (still recovering from that one). So I cut down on the flying around the world to Linux conferences involuntarily.

You know what I discovered? I LOVE not flying around the world for Linux conferences. I love taking only a few flights a year. I love flying mostly in the same time zone (yay, West coast). I love having the energy to travel for fun because I'm not all dragged out by the conference circuit. I love hanging out with my friends who live in the same city instead of missing out on all the parties because I'm in fucking Venezuela instead.

Save the planet. Burn your frequent flyer card.

September 03, 2009 07:04 AM

May 22, 2013

Sarah A Sharp

Installing a custom kernel with USB 3.0 support

This documents my personal flow for downloading and installing a Linux kernel with my xHCI and USB 3.0 code. Until the code is in the upstream kernel and shipping in Linux distributions, you'll have to follow these directions to get Linux USB 3.0 support.

Read more »

May 22, 2013 06:13 PM

July 28, 2008

Kristen Accardi

What is a Plumbers Conference?

After spending a few days manning the Plumbers booth at OSCON, I thought I’d post the answer to the question that everyone seemed to want to know – What is a Linux Plumbers Conference? We came up with the word “Plumbing” to describe the low level infrastructure of a Linux System. This includes the Kernel, desktop infrastructure like X and graphics libraries, system utilities like udev and hal, as well as essential libraries like glibc and friends. These components interface with each other at times – some better than others. We hope to provide a forum for people from these types of projects to get together and try to solve problems that are system wide or cross multiple project boundaries.

In addition to the topics to be discussed in the microconfs and the general talks (see http://linuxplumbersconf.org/program/schedule/), we will have “unconference” style talks. We have several smaller rooms available for people to get together and work out specifics, talk about something they didn’t get on the schedule, or have a group hug. These rooms can be reserved at the start of the conference.

August 18th the registration fee for Plumbers will increase to $300. If you haven’t already registered, what are you waiting for?

by Kristen at July 28, 2008 05:12 PM

February 27, 2012

Linux Plumbers Conf

Plumbers Conference This Year

The 2012 Linux Plumbers Conference (LPC) will be held on August 29-31 in the Sheraton San Diego, and we hope to see you there!

To that end, the LPC Planning Committee is pleased to announce a call for microconferences. These microconferences are working sessions that are roughly a half day in length, each focused on a specific aspect of the “plumbing” in the Linux system. The Linux system’s plumbing includes kernel subsystems, core libraries, windowing systems, media creation/playback, and so on. For reference, last year’s LPC had tracks on Audio, Bufferbloat and Networking, Cloud, Containers and Cgroups, Desktop, Development Tools, Early Boot and Init Systems, File and Storage Systems, Mobile, Power Management, Scaling, Tracing, Unified Memory Management, and Virtualization.

Please note that submissions to a given microconference should not normally cover finished work. The best submissions are instead problems, proposals, or proof-of-concept solutions that require face-to-face discussions and debate among people from different areas of the Linux plumbing. In other words, the best microconferences are working sessions that turn problems into patches representing solutions.

Leading an LPC microconference can be a fun, exciting, and rewarding activity, but please see here for the responsibilities of a microconference working session leader. If you have an idea for a good LPC microconference, and especially if you would like to lead up a particular microconference, please add it to the LPC wiki here.

by paulmck at February 27, 2012 10:11 PM

April 04, 2005

Brandon Philips

New Useful Tools

Last week I found two tools that make my life better and make me look cool in front of my friends (j/k). So I thought I would share them.

Keeping bookmarks sync'd and accessible

Back in the day I used to use a shareware tool to dump my IE bookmarks to html, then upload them via FTP, and then download them again and re-sync. But times have changed and del.icio.us is the new way to bookmark.

For those not in the know del.icio.us is a "social bookmarking" website. The first consequence is that your bookmarks are stored on a globally accessible webserver with an easy to remember URL like http://del.icio.us/philips. The second and more fun aspect is that when you make a bookmark (with one of the great del.icio.us bookmarklets) you can see who else has bookmarked the same page and what other sites may be related and of interest. From this feature I have found some great websites, including my new favorite techno radio station Radio ABF France.

But the coolest part is a plugin for Firefox called Foxlicious that allows you to sync your bookmarks from del.icio.us into a folder, organized by tags. It is great: I can bookmark at home and sync at work, then bookmark at work and sync at home, then... well, you get the idea.

Zebra Tele-scopic

As you may already know, I carry with me at most times an analog notebook (you know, the paper kind). But I have never been able to find an inexpensive pen that is compact enough to keep in my pocket. Until my fateful run to the store last week where I found it! "It" being the Zebra Tele-scopic pen, which is small enough to put in a jean pocket but telescopes into a regular-sized and balanced ballpoint pen. Not only that, but they are far cheaper than the Fisher Space Pen. At ~$5.49 US for two tiny telescoping pens with two refills, these pens are a great deal!

April 04, 2005 12:00 AM

September 10, 2011

Linux Plumbers Conf

Plumbers Conference Next Year

Hello everyone,

Thanks for making this year’s plumbers conference such an enjoyable event.  Next year, we’re planning to co-locate Plumbers with the Kernel Summit and LinuxCon in San Diego from 29-31 August.  The current plan is that Plumbers and LinuxCon would run as parallel but separate events.  To accommodate the parallelism, we’re still planning on keeping the numbers for Plumbers down to 300 and having a separate registration from LinuxCon.  We’re also planning to move the refereed presentations track into LinuxCon itself as a hard-core technical track which would still be selected by the Plumbers Programme committee (both Plumbers and LinuxCon attendees would be able to go to this). We plan to keep the two microconference tracks for plumbers only, but also add a third unconference-type track, where people could plan meetings and split into discussion groups in a style very similar to Ubuntu Developer Summit (only Plumbers registered attendees would be able to go to this).

If you have any feedback about this plan, please send it to the current programme committee at lpc2011@virtuousgeek.org

Of course, we’re also looking to recruit another organising and programme committee for 2013, so if you want to volunteer, please read this web page and then send your bid to the plumbers conference steering committee (who are also the Linux Foundation Technical Advisory Board) at tech-board@lists.linux-foundation.org

by jejb at September 10, 2011 06:57 PM

May 13, 2009

Nivedita Singhvi

Chairing LPC 2009

I’m doing a lot of things this year – and for the hundredth time, I find myself discarding my previous life and blogs and starting anew.

Last October, in a chocolate-induced haze of post Halloween self-satisfaction, I somehow thought it would be a good idea to volunteer to run the Linux Plumbers Conference in 2009. Portland continues to host the event, which makes it possible for me to help out, along with an extraordinarily strong local Linux development and business ecosystem. Last year’s crew did a heck of a job creating the event for the first time. We’re coasting on their toil and troubles, this year, frankly.

Despite the continual incoming dripdripdrip of somber economic news, tightening budgets, market collapses, layoffs, disappearing finances and individual anxieties, we are somehow crafting together what will be a rather interesting and productive conference.

The outstanding news today was that we got a lot closer to signing up another big name for our keynote! It won’t get announced anytime soon, unfortunately, but it will be fantastic if we can get them.  We have already lined up Keith Packard, X Window genius and all round great guy, to give one of the keynote addresses.

We will also have Linus giving an advanced tutorial on git.  It pains me to impose on Linus, but I’m personally very grateful that James Bottomley did the heroic arm-twisting for us.  After losing the video of Linus’s git tutorial in 2008, we badly wanted a chance to reassemble our dignity and geek cred.

If you’re a Linux developer in Portland, OR (or for that matter, anywhere else), what are you waiting for? Register already!

Linux Plumbers Conference will run from Sept 23-25 in 2009 at the Downtown Portland Marriott.


by vedisin at May 13, 2009 06:52 AM

November 25, 2007

Brandon Philips

suckless screen lock

A useful tool: slock is a tiny C program that locks your screen like xlock. But with only 147 lines of very straightforward code, it would be very difficult to introduce vulnerabilities :)

November 25, 2007 08:00 AM

February 18, 2009

Stephen Hemminger

Parallelizing netfilter

The Linux networking receive performance has been mostly single-threaded until the advent of MSI-X and multiqueue receive hardware. Now with many cards, it is possible to be processing packets on multiple CPUs and cores at once. All this is great, and improves performance for the simple case.

But most users don't just use simple networking. They use useful features like netfilter to do firewalling, NAT, connection tracking, and all other forms of weird and wonderful things. The netfilter code has been tuned over the years, but there are still several hot locks in the receive path. Most of these are reader-writer locks, which are actually the worst kind, much worse than a simple spin lock. The problem with locks on modern CPUs is that even for the uncontested case, a lock operation means a full-stop cache miss.
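
One common way out, sketched below in kernel-flavored C purely as an illustration of the general pattern (this is not the actual netfilter patch), is to replace a global reader-writer lock with one spinlock per CPU: readers take only their own CPU's lock, so the hot path never bounces a shared cache line, while the rare writer takes every CPU's lock in turn.

#include <linux/percpu.h>
#include <linux/spinlock.h>

/*
 * One lock per CPU (each must be spin_lock_init()ed at startup).  In the
 * common case a reader only ever touches its own CPU's cache line.
 */
static DEFINE_PER_CPU(spinlock_t, pkt_lock);

/* Read side, taken per packet: cheap, no shared cache line. */
static void pkt_read_lock_bh(void)
{
	local_bh_disable();		/* stay on this CPU */
	spin_lock(this_cpu_ptr(&pkt_lock));
}

static void pkt_read_unlock_bh(void)
{
	spin_unlock(this_cpu_ptr(&pkt_lock));
	local_bh_enable();
}

/*
 * Write side, taken rarely (e.g. when a ruleset is replaced): grab every
 * CPU's lock in turn to exclude all readers.  (Lockdep annotations and
 * ordering details omitted for brevity.)
 */
static void pkt_write_lock_all(void)
{
	int cpu;

	for_each_possible_cpu(cpu)
		spin_lock(per_cpu_ptr(&pkt_lock, cpu));
}

static void pkt_write_unlock_all(void)
{
	int cpu;

	for_each_possible_cpu(cpu)
		spin_unlock(per_cpu_ptr(&pkt_lock, cpu));
}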

With the help of Eric Dumazet, Rick Jones, Martin Josefsson and others, it looks like there is a solution to most of these. I am excited to see how it all pans out, but it could mean a big performance increase for any kind of netfilter packet-intensive processing. Stay tuned.

by Linux Network Plumber (noreply@blogger.com) at February 18, 2009 05:51 AM

October 01, 2010

Sarah A Sharp

xHCI spec is up!

I'm pleased to announce that the eXtensible Host Controller Interface (xHCI) 1.0 specification is now publicly available on intel.com. This is the specification for the PC hardware that talks to all your USB 3.0, USB 2.0, and USB 1.1 devices. (Yes, there are no more companion controllers, xHCI is the one host controller to rule them all).

Open, public documentation is always important to the open source community. Now that the spec is open, anyone can fully understand my Linux xHCI driver (although it's currently only compliant to the 0.96 xHCI spec; anyone want to help fix that?). This also means the BSD developers can implement their own xHCI driver.

Curious what a TRB or a Set TR Deq Ptr command is? Want to know how device contexts or endpoint rings work? Go read the spec!

October 01, 2010 04:13 AM

September 25, 2010

Andy Grover

Plumbers Down Under

Since the original Linux Plumbers Conference drew much inspiration from LCA's continuing success, it's cool to see some of what Plumbers has done be seen as worthy of emulating at next year's LCA!

LCA seems like a great opportunity to specifically try to make progress on cross-project issues. It's quite well-attended so it's likely the people you need in the room to make a decision will be in the room.

by andy.grover at September 25, 2010 01:50 PM

September 10, 2010

Andy Grover

Increasing office presence for remote workers

I work from home. My basement, actually. I recently read an article in the Times about increasing the office presence of remote employees with robots. Pretty interesting. How much does one of those robo-Beltzners cost? $5k? This is a neat idea but it's still not released so who knows.

I've been thinking about other options for establishing a stronger office presence for myself. Recently I bought a webcam. If I used this to broadcast me, sitting at my desk on Ustream or Livestream, that would certainly make it so my coworkers (and the rest of the world) could see what I was up to, every second of the workday. This is actually a lot more exposure than an office worker, even in a cubicle, would expect. If I'm in an office cube, I might have people stop by, but I'll know they're there, and they won't always be there. There is still generally solitude and privacy to concentrate on the code and be productive. I'm currently trying something that I think is closer to the balance of a real office:
  • Take snapshots from webcam every 15 minutes
  • Only during normal working hours
  • Give 3 second audible warning before capturing
  • Upload to an intranet webserver
I haven't found this to be too much of an imposition -- in fact, the quarter-hourly beeps are somewhat like a clock chime.

In the beginning, it's hard to resist mugging for the camera, but that passes:

[webcam snapshot: "whassup???"]

Think about how this is better than irc or IM, both of which do have activity/presence indicators, but which either aren't used, or poorly implemented and often wrong. How much more likely are you, as a colleague of mine, to IM, email, video chat, or call me if you can see I'm at my desk and working? No more "around?" messages needed. You could even see if I'm looking cheerful, or perhaps otherwise indisposed, heh heh:

[webcam snapshot: "hello kitty"]

On a technical note, although there were many Debian packages that kind-of did what I wanted, it turned out to be surprisingly easy to roll my own in about 20 lines of Python (http://github.com/agrover/pysnapper/blob/master/webcam.py).

[webcam snapshot: "working hard."]

Anyways, just something I've been playing around with, while I wait for my robo-avatar to be set up down at HQ...

by andy.grover at September 10, 2010 05:20 PM

November 08, 2009

Valerie Aurora

Migrated to WordPress

My LiveJournal blog name - valhenson - was the last major holdover from my old name, Val Henson. I got a new Social Security card, passport, and driver's license with my new name several months ago, but migrating my blog? That's hard! Or something. I finally got around to moving to a brand-spanking-new blog at WordPress:

Valerie Aurora's blog

Update your RSS reader with the above if you still want to read my blog - I won't be republishing my posts to my new blog on this LiveJournal blog.

If you're aware of any other current instances of "Val Henson" or "Valerie Henson," let me know! I obviously can't change my name on historical documents, like research papers or interviews, but if it's vaguely real-time-ish, I'd like to update it.

One web page I'm going to keep as Val Henson for historical reasons is my Val Henson is a Man joke. Several of the pages on my web site were created after the fact as vehicles for amusing pictures or graphics I had lying around. In this case, my friend Dana Sibera created a pretty damn cool picture of me with a full beard and I had to do something with it.



It's doubly wild now that I have such short hair.

November 08, 2009 11:36 PM

August 23, 2009

Nivedita Singhvi

A Diamond in the Rough

If you’re ever in the state of Oregon, take the time to visit the  Rice Mineral Museum. I took a trip there today, and it was eye-opening, staggering, and simply wonderful. This is a world-class museum, a Smithsonian-level collection hiding out in the middle of nowhere, also known as the north end of Shute Road in Hillsboro. Their website and photo gallery simply do not do them justice.

Most of the pieces were so staggeringly beautiful that they far outdid commercial art that’s sold for megabucks. Amongst the very cool things, a slice of the collection comes from Pashan, Pune, one of the several places on this planet I call home.


by nivedita at August 23, 2009 11:31 PM

September 22, 2008

Kristen Accardi

Mission Accomplished

for real.

I couldn’t have been happier with how the Linux Plumbers Conference went last week. I went back and looked at the original proposal that we had Arjan, Greg, and Randy present to the Linux Foundation, and we seem to have hit all our original goals. From conception we wanted this to be a “working” conference – and from the conversations in the hallways that I overheard, to the discussions in the microconfs that went on, I could see that people were indeed getting together, discussing issues and solving problems. Conferences require a lot of time, effort, and money to do right, and it’s gratifying to feel that something useful will come out of this.

I think that now I can go back to blogging about duck poo and vegetables.

by Kristen at September 22, 2008 09:50 PM