Snoqualmie Falls at Flood
Update (January 2009): Just over two years later another flood event, and some more video of the falls and Spring Glen below.
Flood waters pour over Snoqualmie Falls.
Posted by wac at 9:55 AM 0 comments
Our webcam now has a time-lapse movie for today and yesterday. Or rather it will have one for yesterday after it’s been running for more than one day. This was achieved by saving off a copy of every webcam image into a directory for the day and then processing them with some free tools. In this post I walk through the process we use to make this happen.
First, my Gawker image fetch project grew a new -d directory option to save JPEG files into a directory. We have to save the images if we expect to make a movie out of them at some point. It should be noted this requires a bit of disk space. With a new image every 30 seconds, a day of images occupies nearly 150 megabytes of disk.
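A quick back-of-the-envelope check of that figure (the roughly 52 KB average frame size is my inference from the total, not a measured number):
echo '24 * 60 * 60 / 30' | bc    # 2880 frames captured per day
echo '2880 * 52 / 1024' | bc     # ~146 MB per day at ~52 KB per frame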
Now that we have the images, we want to turn them into a movie. For this we start with netpbm.
Using jpegtopnm we convert the series of JPEG images into a series of raster images that the next software can deal with.
Next in line is mjpegtools. It has a tool ppmtoy4m that takes the series of raster images and renders it into a “Y4M” video file. The Y4M format adds useful information to traditional YUV video data, such as the size of the image and the framerate. It’s worth noting that this Y4M data is mainly raw video frames, which results in a fairly large file if we write it to disk.
Lastly, we want to use x264 to convert the Y4M data into a highly-compressed movie that you can view with your web browser. The only problem is that x264 treats its input as raw YUV data by default and has no switch to force Y4M handling; it only uses Y4M data if the input file is named something.y4m. But we don’t want to save a very large Y4M file to our disk every 10 minutes when we’re generating the movie, only to throw the file away when we’re done. We want to just send it the input directly. All the other tools allow us to pipeline the commands together, so as one command processes some data it passes its output straight on to the next command. So now it was time to fix x264 to let us pipeline the way we want:
I sent this message to the x264-devel list this morning.
This patch adds the long option “--y4m-input” which sets b_y4m. It also avoids any evaluation that might result in setting both b_avis and b_y4m.
In order to politely handle pipelines from mjpegtools and other y4m sources, b_y4m must be set even if the filename is “-”. Sadly, that filename does not end in “.y4m”.
The patch is available here.
In the longer term it would be useful to offer options for all accepted input and output types on stdin and stdout to avoid having to create (rather sizable) temp files everywhere.
So with that fix in place, we can now send a series of JPEG files through the pipe of jpegtopnm, ppmtoy4m, x264 and wind up with a time-lapse movie you can watch.
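As a rough sketch of that pipe (the frame directory, framerate and output name here are my assumptions, not the exact production script):
# convert each JPEG to PNM, stream the lot into ppmtoy4m, and encode
for f in /webcam/today/*.jpg; do jpegtopnm "$f"; done \
    | ppmtoy4m -F 25:1 \
    | x264 --y4m-input -o timelapse.mp4 -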
Update: The files are now also mangled by the qt-faststart program (part of ffmpeg), which lets QuickTime (and possibly other players) start playback immediately, before the whole file has loaded. I’ll try to get this change committed to svn soon.
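For reference, qt-faststart just takes an input and an output file (the names here are placeholders):
qt-faststart timelapse.mp4 timelapse-fast.mp4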
Posted by wac at 11:08 AM 0 comments
There was just an article on Slashdot about the new tool Hendrik Holtmann developed, smcFanControl, that allows more control over the minimum fan speed in Apple’s MacBook and MacBook Pro computers.
Apple sets the fan speed to 1000 RPM by default. This choice results in a surface temperature well in excess of what’s comfortable on both the bottom and top of the “laptop”. In fact it’s so warm that Apple advises users to “not leave the bottom of your MacBook Pro in contact with your lap or any surface of your body for extended periods” in the owner’s manual.
By using this new tool and setting the minimum to a more reasonable 2000 or 2500 RPM you can have a laptop that is still quiet but can actually sit in your lap comfortably. The effect on battery life should be negligible compared to the big power consumers (e.g. the display backlight, the hard drive).
Best yet, it’s open source, not another $10 piece of “shareware”, so everyone can use and improve it.
Update: There’s an even better program for handling this called Fan Control that is also free software, and has a handy pane in System Preferences.
Posted by wac at 11:42 AM 0 comments
Labels: computers
The TiVo has been recording CBS’s “Numb3rs” show for us for a while. Upon further investigation, and not totally unrelated to my last post, I’ve found they have a whole site at www.weallusematheveryday.com with practical mathematics exercises for high school kids (or older kids with rusty math knowledge) based on things mentioned on the show.
It’s good stuff. It would be interesting if one of the many medical shows focused just a little on teaching tools like this rather than obsessing over the love lives of the characters. If real medical professionals led lives like those depicted on every hospital drama I’ve ever seen on TV, our life expectancy would be much lower.
Posted by wac at 1:23 AM 0 comments
Labels: math
This post is a quick stroll down algebra lane to figure out whether holding on to something for the (more favorable) long-term capital gains rate really makes sense or not. I’ve discussed this sort of problem with a couple people and I’m recording my ideas for reference, posterity and criticism…
This evening I was thinking about the cliff in taxation between short-term capital gains and long-term capital gains. Something is “long-term” if held for more than a year, or if you’re forced to sell it because you’re hired into a position in the Bush administration. (No, I’m not making that part up.)
Short-term gains are taxed at the normal income rate which is 25 or 28% for most software developer/system administrator folks I know, and long-term is taxed at 15% in most cases. (These numbers may change at Congressional whim. And while I’m disclaiming things here: I am not your tax or investment consultant, you should consult one if you want advice you can hold someone liable for. Beware of dog. Slippery when wet.) I’m going to pretend the AMT doesn’t exist for purposes of this exercise. (If your capital came from stock options, the IRS sadly will not let you pretend the AMT away.)
This leads to a problem of deciding if the risk of waiting longer is worth the 10-13% difference in tax rate. The rate only applies to the gain, not the full amount. The amount the capital is worth in-pocket is the price minus whatever capital gains tax applies.
So a useful application of algebra:
Net = Price – (Price – Paid) × Taxrate
The net right now is probably going to be at the higher short-term rate. The market goes up and down a lot, and it’s no good if by the time the long-term rate applies the net is going to be less than it is now. You can’t necessarily time the market and know where it’s going, but some industries have pretty obvious seasonal trends. What price can the capital drop to before it’s a better deal to sell now? We know the net we can get right now because of the equation above. So now we need to find the price for another tax rate, so solve for Price:
Net = Price – (Price – Paid) × 15%
Net = Price – (15% × Price – 15% × Paid)
Net = Price – 15% × Price + 15% × Paid
Net – 15% × Paid = Price – 15% × Price
Net – 15% × Paid = (1 – 15%) × Price
(Net – 15% × Paid) ÷ (1 – 15%) = Price
So if I paid 6.50 for something that’s now worth 20.00 short-term, that’s equivalent to 18.41 long-term. That is, if it’s likely to drop in value by more than about 1.60 by the time the one-year “long-term” timer runs out, I’m better off selling now.
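A quick check of that arithmetic (assuming the 25% short-term rate, which is what makes the 18.41 figure come out):
echo 'scale=4; 20.00 - (20.00 - 6.50) * 0.25' | bc   # 16.6250 net if sold now
echo 'scale=4; (16.625 - 0.15 * 6.50) / 0.85' | bc   # 18.4117 break-even long-term price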
In just 6 months’ time, you’ll need to net 2% more just to keep up with the current rate of inflation.
((102% × Net) – 15% × Paid) ÷ 85% = Price
So now I’m at 18.80 long-term vs. 20 now. If I think there’s an even money chance that it’ll lose 1.20 in 6 months, I’d be better off to sell now and invest in something that has a more favorable trajectory.
“Timing the market” is a foolish adventure, but blindly waiting for the long-term is an unnecessary risk if the value moves in a seasonally predictable way that’s not in your favor.
I’ve setup a Google Spreadsheet with this math in it for the curious.
Posted by wac at 10:51 PM 0 comments
Labels: economics
NPR had a story this morning about the Coast Starlight (a.k.a. the “Coast Starlate”) train that runs between Seattle and Los Angeles. It’s good to see press coverage of the problems this line is having. (e.g. From October 2005 through August 2006 the train delivered its passengers on-time only 2% of the time.) Maybe with enough coverage Union Pacific can be shamed into helping the trains run on time.
Posted by wac at 11:28 AM 0 comments
Labels: economics
There’s a new contest at MyDreamApp.com for new Mac OS X software. The format for the contest with celebrity judges and open submissions from the user community and prizes for the top 3 entrants sounds like a lot of fun. A lot of fun, that is until you read the fine print.
Participants are forbidden from making their dream app open source if it’s selected as a winner. That’s right, by participating in the contest you agree to give up all interest in your idea and any code you put behind it to the group organizing the contest. If you wanted to give back to the community by offering your awesome idea for free or, better yet, offering it as open source software, you’re out of luck.
But wait, there’s more; the deal gets even better. In return for giving away your idea they’ll offer you a meager 15% royalty from “net income” from the Mac version of the idea. “Net” in this context means if they spend money advertising, promoting or paying developers/QA/writers to work on the Mac version of the product, that gets subtracted from the amount you get 15% of. This is exactly how many musicians get screwed out of the proceeds from their music by record companies. “You know it cost a lot to promote your album, so here’s a free cup of coffee. We look forward to your next effort.”
Just to top it all off, some top finishers get free equipment. The catch? The taxes on the giveaway are your responsibility. So you have to pay, out of pocket, federal, state and local taxes (let’s say 28%) on the value of the “free” item. I’m going to guess that the 15% of “net income” may not be enough to cover the taxes on the free item.
Did I mention the liability release provision potentially allows the organizers to skip out and not pay you anything? And if they do pay you it will be through PayPal. And if your submission is accepted you may not even be able to claim it was your idea on your resume without their permission. And, worst of all, if the effort required for them to follow through on making your idea a shiny product is more than “reasonable” they can just walk away and develop it for Windows instead. (And pay you nothing.)
It’s an interesting way for some people to feed on the creativity of others, and I guess if people understand all this and still participate, that’s their business. I’m a little dismayed that Woz and other luminaries are participating in something that seems so potentially abusive of its “winners”.
This isn’t to suggest that the people organizing the contest will do any of these nasty unkind things, only that they could if they wanted to. I don’t know any of them, and I hope, for the sake of those who participate, they stay honorable.
It would be nice if someone had an open source-friendly contest (with nicer rules) along these lines for great new open source Cocoa titles in the tradition of Adium and Colloquy.
Posted by wac at 10:49 AM 1 comments
The pkgsrc distribution suggests (and offers tools for) making a fixed-size disk image for OS X and Darwin users. This is not the best approach with OS X 10.4 and later.
The following command line incantation can create a resizable HFS+ disk image that is case-sensitive and will grow to fit what you need:
hdiutil create -fs HFSX -fsargs "-c c=64,a=16,e=16 -s" -volname pkgsrc -type SPARSE -stretch SizeOfYourDisk -size SizeOfYourDisk -ov /SomewhereOnYourDisk/pkgsrc
You can avoid silliness with your PATH
by adding symlinks:
ln -s /Volumes/pkgsrc/pkg /usr/pkg
ln -s /Volumes/pkgsrc/pkgsrc /usr/pkgsrc
ln -s /Volumes/pkgsrc/pkgdb /var/db/pkg
You can compact this image later to reclaim disk space with:
hdiutil compact /SomewhereOnYourDisk/pkgsrc.sparseimage
You can later use hdiutil resize if you move it to a disk of a different size. Mounting can also likely be automated to a large degree; maybe the lazyweb can chime in with ideas along those lines.
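For what it’s worth, attaching and growing the image are both one-liners (the size and paths are examples):
hdiutil attach /SomewhereOnYourDisk/pkgsrc.sparseimage    # mounts at /Volumes/pkgsrc
hdiutil resize -size 20g /SomewhereOnYourDisk/pkgsrc.sparseimage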
Update: Don’t, for the love of your files, put the disk image inside a FileVault home directory (or any other sparseimage disk image). I had to borrow a PowerPC Mac and buy a copy of DiskWarrior to get my files back. DiskWarrior apparently doesn’t work with ICBMs yet.
Posted by wac at 10:49 PM 3 comments
Labels: computers
I’ve made some improvements to Gawker Image Fetch and moved it over to Google Code Hosting.
If you check out the copy from Subversion over there you’ll gain the ability to specify the Gawker instance with Bonjour/Zeroconf, as well as a new --foreground mode that fills your terminal with exciting debugging messages like:
Saw service ‘Unprivileged Clown._lapse._tcp.local.’
Source moved to 1.2.3.4:7548
Replacing connection
Connecting to 1.2.3.4:7548
The new Zeroconf features appear to work in both OS X and FreeBSD. Theoretically it should be able to follow the camera to a new IP if the camera machine reboots.
I’m still seeing an occasional bug that requires restarting the script in the install for the webcam. Hopefully the additional debugging information will help me get to the bottom of that.
Let me know if it works for you…
Thanks to the developers of pyzeroconf for that library. It has a few bugs which I’ve fixed for this script and haven’t submitted back yet.
Update: I sent a patch along by email to pyzeroconf interested parties.
Posted by wac at 3:21 PM 0 comments
I got a most unexpected letter in the mail today. It started out like this:
Dear Mr. William A. Carrel,
Our records show that you haven’t yet registered for the benefits of AARP membership, even though you are fully eligible.
Oh, really?
I’m having a pretty good time of things here in my late-20s. I realize that the AARP is opposed to extending the retirement age for Social Security, but this is really a bit over the top.
Apparently I’m not the only one in their 20s that AARP has sent a letter to. A quick Google search for “Our records show that you haven’t yet registered for the benefits of AARP membership, even though you are fully eligible” turns up a whole host of 20-somethings that have received these letters.
So what I’d really like is “their records.” No, really, I want a copy of the records they’re apparently keeping on me that indicate I’m more than 30 years older than I actually am.
My guess? There are no such records. The AARP is lying. “Our records” probably means “Choicepoint’s records”. You know, the same records that determine your creditworthiness. Those records are also mined by the NSA (along with your phone records) to take a guess at whether you’re a terrorist.
The people keeping these records can’t even place me within the right generation. God only knows what else they (incorrectly) think about me.
If I wind up in Gitmo and you want to locate me, ask for the young looking 50-year-old.
I’ve put an image of the letter online.
Posted by wac at 9:12 PM 0 comments
The Seattle University School of Law keeps a master calendar on their website where students (and faculty) can check on important dates in the coming terms. Unfortunately, this calendar is just in static HTML and you can’t easily import it into a tool you might want to use to help keep track of your days. (I hear that time management can be important in law school.) The master calendar isn’t in your Seattle U. OWA account by default either.
With the obvious disclaimers (well, it is a law school after all), I’ve generated a Google Calendar for SU Law that is available in a variety of formats. This is mainly for our personal use, but you’re welcome to use it if you find it useful.
Posted by wac at 11:01 AM 0 comments
Labels: law school
There’s not a lot more to it than the title suggests. I’ve made a script you can use to fetch webcam images from any host on the Internet that’s running Gawker.
The script takes arguments of the host and port to connect to as well as the file to overwrite with the latest exciting image. Together with some javascript you can do neat things like have a live updating webcam image.
Silly things like half-written images are dealt with, but that means the directory that has the image file in it needs to be writable. If the camera sends images faster than your local machine can write out to disk, that’s bad. So don’t do that.
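The usual trick for this, sketched below with hypothetical paths and a hypothetical fetch_image stand-in (the script does something equivalent internally): write the new frame to a temporary file in the same directory, then rename it over the old name, since rename is atomic within a filesystem.
# fetch_image is a stand-in for whatever produces the new frame;
# readers of latest.jpg never see a half-written image
fetch_image > /var/www/cam/latest.jpg.tmp &&
    mv /var/www/cam/latest.jpg.tmp /var/www/cam/latest.jpg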
In the future, I’ll make changes that fulfill desires I have, like producing a timelapse movie of the last 24 hours. And, more complicated, a timelapse movie that skips the less interesting dark frames in the night.
After a few minutes of thought: Future changes like reconnecting in the event of a failure spring to mind as well. For now though, this is scratches-an-itch-ware.
Update: My script now resides at Google Code with the code available in Subversion at http://gawker-image-fetch.googlecode.com/svn/trunk/. It now supports finding cameras with Zeroconf and reconnecting automatically.
Later Update: New features are appearing, like support scripts for doing time-lapse movies. Check the code site for the latest information.
Posted by wac at 10:45 PM 4 comments
I hope you’ll pardon the dearth of posts recently. In case you hadn’t heard, I’ve started at a new job as a Site Reliability Engineer for Google.
After a handful of weeks in Mountain View, I’ve made the 875 mile drive back home. Some highlights of the trip down and my free time down there will trickle into the gallery over the coming days.
Posted by wac at 10:58 PM 0 comments
Labels: computers
We have a new test webcam pointed at Mount Si. I’ve always been bothered by the bizarre behavior of the QuickCam when it catches the morning sun. Using an iSight may help this since it theoretically is smart enough to do some aperture adjustment on its own. On the other hand, its ability to pick out details in the dark has proven somewhat sketchier.
It is worth noting that the new webcam setup I’m testing is using only open source software (aside from the operating system). Or at least it will be once I’m happy with the performance and release the script I’m using to fetch images that are served with Gawker.
The resolution of this camera is about the same as the QuickCam, but the image is a lot sharper. I have a feeling that the image tag on that page isn’t shaped quite right for the size of the image that is output from that camera. But that’s why it says “testing” all over the page. A chance to work the kinks out.
def ParseDescription(self, desc_buffer):
    # Deliberate stub: does nothing for now.
    pass
We’ll see if that’s premature optimization some time soon…
Posted by wac at 10:22 PM 0 comments
The DSL link was upgraded from 1.5Mb/s down and 256kb/s up to 3Mb/s and 512kb/s respectively this morning, which reminds me that getting the most out of the link requires some work. Daniel Hartmeier has a nice write-up on prioritizing ACKs, which is a good idea on PPPoE links so that you can upload and download simultaneously without killing your throughput one way or the other.
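The gist of Hartmeier’s approach in pf.conf, sketched for a hypothetical tun0 interface and uplink bandwidth (the exact numbers are the part you have to guess):
# two priority queues: empty ACKs ride q_pri, bulk data rides q_def
altq on tun0 priq bandwidth 500Kb queue { q_pri, q_def }
queue q_pri priority 7
queue q_def priority 1 priq(default)
pass out on tun0 proto tcp from any to any flags S/SA keep state queue (q_def, q_pri)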
PPPoE especially seems to have issues with empty ACKs getting caught in buffers and having to wait 1 second or more to be fed through and out. The following user PPP setting in FreeBSD is of particular interest.
set ifqueue packets
Set the maximum number of packets that ppp will read from the tunnel interface while data cannot be sent to any of the available links. This queue limit is necessary to flow control outgoing data, as the tunnel interface is likely to be far faster than the combined links available to ppp.
If packets is set to a value less than the number of links, ppp will read up to that value regardless. This prevents any possible latency problems.
The default value for packets is “30”.
If we’re uploading a lot of data, those 30 default packets will tend to be full MTU-sized (generally 1492 bytes for PPPoE). That’s nearly 44 KBytes of data waiting around to be transmitted, or about 358 kbits. If the link is only pushing 256 kbits each second, new items in the queue have to wait nearly a second and a half to get out. If your TCP ACKs for downloads start getting delayed by that long, the throughput on that connection is going to suffer, and VoIP or gaming is simply out of the question.
With faster links, this default is somewhat less problematic. Our new 512kb/s link still has a buffer of 700ms or so. Still more than enough to confuse the heck out of the in-flight packet management in TCP when that buffer wait starts fluctuating wildly.
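Those delay figures are just queue size over link speed:
echo 'scale=2; 30 * 1492 * 8 / 256000' | bc   # 1.39 seconds at 256kb/s
echo 'scale=2; 30 * 1492 * 8 / 512000' | bc   # .69 seconds at 512kb/s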
I suspect that if ifqueue were set very small and some sort of weighted fair queuing were used to manage its input, total performance would be better. It would also avoid having to guess the optimal upstream bandwidth the way you do with the suggested pf approach.
Posted by wac at 8:39 AM 1 comments
Labels: computers
Traditionally I’ve used DarwinPorts for managing the packages installed on my Mac laptop. Sadly, there has been some falling out between the OpenDarwin folks and Apple recently. For this and other reasons having DarwinPorts build Mac OS X-friendly Intel binaries cleanly (let alone universal binaries) seems highly unlikely.
In order to produce things that’ll work on both PowerPC Macs and ICBMs (Intel-CPU based Macs, what an acronym) you have to build them yourself or find someone else who has taken the time to do it.
The Mac Python crew have been working for a couple months now on a universal Python 2.4.2 install that passes its own regression testing (unlike the one that Apple ships). They have a copy of the installer .dmg available online. (There is also a universal Ruby online if you swing that way.)
One of the problems I’ve encountered, after using these builds, is that the universal Python really wants its C-based modules compiled as universal builds also. It’s not such a big deal when the modules are standalone (and pay attention to the CFLAGS
that the Python build was made with). It is more problematic when they want to link against other things, which then must also be built universal.
In today’s adventure, I wanted to build the psycopg module that is a way to access PostgreSQL databases from inside Python. This meant making a universal build of PostgreSQL. Marc Liyanage has some universal builds available, but they were for 8.1.2 rather than 8.1.3. His ViewCVS install is broken right now, but I was able to pull the Perl snippet from the Google cache to get the build to succeed.
PostgreSQL actually builds remarkably easily, aside from an oft-repeated bit where it takes the myriad object files (.o) from each smaller component and makes them into one larger object file. Apparently ld doesn’t know how to consolidate “fat” object files. To work around this you have to run ld once for each architecture (ppc, i386), then lipo the resultant files together. Marc’s one-liner takes care of this change in the various makefiles.
find . -name Makefile -print -exec perl -p -i.backup -e 's/\Q$(LD) $(LDREL) $(LDOUT)\E (\S+) (.+)/\$(LD) -arch ppc \$(LDREL) \$(LDOUT) $1.ppc $2; \$(LD) -arch i386 \$(LDREL) \$(LDOUT) $1.i386 $2; lipo -create -output $1 $1.ppc $1.i386/' {} \;
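Expanded for a single component, the rewritten rule boils down to something like this (SUBSYS.o is the consolidated per-directory object in PostgreSQL’s build; the input object list is illustrative):
# build one relocatable object per architecture, then glue them together
ld -r -arch ppc  -o SUBSYS.o.ppc  foo.o bar.o
ld -r -arch i386 -o SUBSYS.o.i386 foo.o bar.o
lipo -create -output SUBSYS.o SUBSYS.o.ppc SUBSYS.o.i386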
Psycopg ignores the CFLAGS given to it during configure, so it took a slight bit of monkeying around to use the magic flags that are in the Apple documentation to build. Other than that it went smoothly.
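Those “magic flags” are presumably the standard universal-binary settings from Apple’s docs, along these lines:
# build for both architectures against the 10.4 universal SDK
export CFLAGS="-arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk"
export LDFLAGS="-arch ppc -arch i386 -Wl,-syslibroot,/Developer/SDKs/MacOSX10.4u.sdk"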
I stuck the PostgreSQL build into /Library/Frameworks/PostgreSQL.framework for good measure. I’m not sure if that’s a good or bad idea, but it does keep it from getting stomped on by native builds done in OpenDarwin or Fink or whatever.
It’s about 8MB all wrapped up so I’m not going to post it here, but if people really want it maybe I can work something out. That’s all for now…
Posted by wac at 6:36 PM 0 comments
Labels: computers
Yes, the joys of having a Rev. A hardware product from Apple are boundless. My new MacBook Pro seems to have a problem dealing with USB audio output devices. According to a reply I got on Apple’s discussion boards this affects Intel-based iMacs as well.
Thanks to the fact that Apple won’t open the xnu (Mac OS X kernel) sources for the Intel systems, I can’t effectively chase down the issue myself (potentially saving some beleaguered Apple engineer some work rounding up a suitable test rig). So I’ve resorted to filing a bug with Apple’s developer feedback interface: 4460973 – USB Audio Output Fails on Intel Macs. I’ve never used this before and my previous experience with Apple’s bugfix process was lackluster. The telltale kernel log message:
Feb 25 21:57:09 macbook kernel [0]: USBF: 89320.749 IOUSBPipe[0x4244600]:ClosePipe for address 6, ep 1 had a retain count > 1. Leaking a pipe
I always figured there would be some sort of minor issue like this, being Rev. A hardware and all. On the whole the machine is great, but I imagine this would be a serious bit of sadness for DJ types that weren’t just using the built-in outputs. USB audio output devices work if, and only if, the device was plugged in at boot and the machine hasn’t been in “sleep” mode between then and now. So just reboot, over and over and over again! I’d just as soon leave that fun usability experience to the folks with that other operating system.
It’ll be interesting to see how quickly Apple can get a fix for this. Since it affects iMac users as well, it’s most likely a problem with the USB controller or the driver for it. However, it’s hard to say for sure without digging a lot deeper, and dig_deeper() is returning ENOXNUSRC.
Posted by wac at 5:38 PM 0 comments
Labels: computers
I recently received my new MacBook Pro and have been slowly working through getting all the programs I want running nicely. Native Intel code screams compared to the PowerBook I had been using previously, but sadly not all programs have been rebuilt for both platforms yet.
SSHKeychain is an extraordinarily useful program, but is missing a “Universal” (Intel + PowerPC machine code) build. Luckily it is open source, so I was able to simply build it myself. You can get a copy with this link. It is subject to a BSD-style license which is included in the zip file. If you’re nervous about .zip files after the recent virus shenanigans there is also a detached signature.
Many thanks to Bart Matthaei who originally developed this indispensable tool.
Posted by wac at 3:24 PM 0 comments
Labels: computers
Important Disclaimer: This is our personal website. The views, thoughts, text and images expressed on these pages are our own and not those of our employers, the universities we attend, the United States Government, ABC, or Major League Baseball.
© 1997-2008 Carrel.ORG