"The more I find out, the less I know."

Leopard First Impressions (and Critical Bug Alert)

Sunday - October 28, 2007 12:13 PM

I admit it, I was one of those people who bought Mac OS X 10.5 Leopard on the first day. Here are my first impressions.
First, if you're upgrading to Leopard, there is a critical bug in the system which shipped on the DVD. Under some conditions, this bug can cause the Keychain manager to wipe out the user password, making it impossible to log in. If the user account is also the admin account (as is the case on most single user machines), this will also make it impossible to install system updates, manage other accounts, etc.

After upgrading to Leopard, run Software Update immediately, before running any other programs, especially Mail and Safari (both of which make heavy use of the keychain). Apple has published a fix, and running Software Update right away will prevent the bug from affecting you. You should "Cancel" any requests to access the Keychain before installing the update.

If you do get bitten by the bug and find yourself unable to log in, it is possible to reset your password with about five minutes' effort. Here are the official instructions for recovering a lost password after a Leopard upgrade. It is not difficult, but it requires booting into UNIX single-user mode and entering some arcane command lines. Just follow the instructions precisely as written and you'll be fine.

Apparently this bug affects accounts which were originally created on 10.2.8 or earlier, even if that account was originally created on a different machine. So if you've been migrating your accounts through a couple generations of Macs, there's a good chance you're vulnerable. She Who Puts Up With Me is a software QA professional, and she said she can see how this bug slipped through: there's a limit to how many generations of hardware and software you can test through, and this is one of those crazy edge cases which is rare in the development environment, but not that unusual in the field.

Other Impressions
Overall, this is probably the "dirtiest" OS X release I can recall, and I've been on OS X since 10.1. In addition to losing my login password, several of my system and Mail preferences were reset (a minor inconvenience), permissions for personal web sharing got screwed up (still not fixed), and I had to reset my local WiFi network settings (also a minor inconvenience). In other words, don't plan on this being as simple an upgrade as earlier versions. You will have to do some tweaking.

That said, I also think this is one of the most worthwhile OS X releases I can recall. It's worth the nuisance (and in my case, pain) of the upgrade for a single new feature: Time Machine.

I've been struggling with backups on my computers for years, trying to find a system which would be (a) painless, (b) comprehensive, and (c) completely transparent. Time Machine delivers on all three. Just turn it on, and that's it. From then on, the system keeps incremental backups every hour and gives you a slick tool to recover lost files (or the entire system, if it comes to that).

My one complaint about Time Machine is that the user interface is really cheezy. On the other hand, as John Gruber argues, that might be a good thing, since the cheez-whiz UI will get people to actually turn it on, and once on, it will run silently in the background forever, defending the user against hardware failure and accidental deletion.

As an aside, since Time Machine backs up everything, this is likely to hand a new forensic tool to those investigating computer crimes and kiddie porn cases. I'm just sayin'. You can specify a list of files and folders to exclude from Time Machine, though, so if you have something you don't want the FBI to know about, you should take advantage of that. The "Secure Empty Trash" feature will not remove the Time Machine archives--though it probably should.

Anyway, I went out and bought a cheap 1TB external drive array (less than $300), divided it into two partitions for my laptop and my wife's, and installed it as a shared drive on our iMac. The shared partitions work perfectly as backup drives, and now whenever our laptops are on the home network we have backups. Utterly painless: we don't have to do anything other than connect to the WiFi.

Plan on letting the computer run overnight for the initial backup, though. It takes a while.

One feature I would like to have in Time Machine is the option to create multiple backup volumes. My laptop spends about as much time at the office as at home, and it would be useful (not to mention good practice) to have a backup drive in each location. Since Time Machine is so painless to use, I can see this as a very reasonable thing to do. However, as near as I can tell, Time Machine doesn't permit more than one backup volume.

Other Nice Stuff
Since we have four Mac OS machines in our home (laptop for each of us, Mini for the kids, and an iMac which we use as a home server), networking and sharing is a fairly big deal. I'm really happy to see the return of shared folders/drives. What's more, the interface for file sharing is simpler and easier than ever.

Having a self-cleaning Guest account is really nice. Haven't used it, but I'm sure I will.

I haven't had a chance to play with the new Parental Controls yet, but I'm sure I will. That feature gets a real workout on the kids' machine.

I also haven't had a chance to play with Spaces yet, but that's also going to be a useful feature. Now we just need someone to hack the motion sensor on newish laptops so you can change screens by smacking the side of the computer.

On the downside, I really don't care for the translucent menu bar, and the glowing dot on the Dock for active apps (replacing the black triangle) is hard to see. Also, open Finder windows seem to mysteriously suck CPU from time to time. Closing the window fixes the problem, but I'm not sure exactly what the processor time is being used for.

Posted at 12:13 PM | Permalink |

Three months of iPhone

Thursday - October 18, 2007 04:45 PM

Without much fanfare, my iPhone passed its three month birthday a few days ago. That's long enough for the honeymoon to be mostly over (at least, it was with my earlier smartphone). So what do I think?
First, this is the first phone I've owned in many years which I didn't actively hate after three months--the last phone I really liked for a long time was a Nokia 8000-series phone from about 1997 or 1998. I don't remember the model, but it was tiny, chrome, and had a nifty slide-out keypad cover. So the iPhone passes the not-loathing test, which is unusual right there.

On the positive side:

1) The web browser works really well. The browser in my Treo sucked canal water, so I never used it. I use the iPhone for surfing the web all the time, and I never thought I'd get this addicted to having a browser in my pocket.

2) Google maps is awesome, even without GPS. The best part is live traffic information, which is really helpful during rush hour. If a future revision includes GPS, then the maps function could be a killer product all by itself.

3) YouTube is fun, and even though I don't use it all that much, the kids do. It's sometimes a useful way to distract them so I can get something else done.

4) Video podcasts are a great way to pass a long flight. The big screen makes the iPhone almost as good as one of those portable DVD players, except it's smaller and fits in your pocket.

5) This is still the coolest gizmo around. Even after using it for three months, it still seems like something which dropped through a time vortex from three hundred years in the future.

Observations which are neither positive nor negative:

1) Stocks, weather, notes....meh. I use them very occasionally, but if I had the option, I'd probably want to ditch those in favor of something else.

2) I'd say I'm about equally as clumsy on the touchscreen keypad as I was on my Treo's mini keyboard. I've come to rely on the autocorrect feature, though, and every now and then it won't correct something and I'll text She Who Puts Up With Me something like "I'm kwabumf the office now."

3) The iTunes ringtone maker sucks, but Ambrosia's iToner now works with version 1.1.1, and that rocks. I didn't really care about ringtones until Apple messed it up so badly.

4) I don't really care about unlocking or jailbreaking my iPhone, so the whole kerfuffle about version 1.1.1 really didn't matter to me.

5) The $200 price drop also didn't matter to me, but the $100 credit was pretty cool. We used ours to buy a nice pair of Bose ANR headphones.

Negative stuff:

1) No to-do list? Oh wait, I never used the to-do list. Nevermind.

2) Since 1.1.1, the iPhone won't automatically connect to our home wifi network: we have to go into Settings and choose it manually from the list of available networks. This may be because our home network doesn't broadcast its SSID.

3) Safari is still unstable, and crashes every now and then. On the other hand, the crashing is so very subtle you sometimes don't even notice it: in the worst case it drops back to the home screen, and sometimes it simply has to reload the open pages when you select Safari.

4) Mail is cumbersome, but mostly because it lacks just a couple of features. If it did rules filtering and junk mail filtering and made it easier to delete multiple messages, that would solve most of my complaints. It's still better than on my Treo, which froze regularly when I tried to read mail.

5) Some sort of copy-and-paste function would be very helpful.

Posted at 04:45 PM | Permalink |

iPhone First Impressions

Saturday - June 30, 2007 01:22 PM

I stopped by one of the local Apple Stores this morning to get a glimpse of the almost-mystical iPhone.
I came away impressed. I'm fairly confident that when my old Treo dies (and it will soon--only three of the seven screws holding my Treo together remain) it will be replaced by an iPhone.

It's not that the iPhone has any features my Treo lacks. In many ways, the Treo is more feature-complete. But Apple has once again figured out that what a lot of people want is not features but useful features.

For example, I almost never used the Web browser or e-mail on my Treo because they suck. E-mail I gave up on almost immediately because it consistently crashed my Treo and was useless for navigating my inboxes across multiple accounts. The web browser is more stable, but it makes navigating real web pages a painful experience which is almost never worth it.

iPhone seems to have solved both problems.

I also came away with the impression that this is clearly a 1.0 product (though a really awesome one). Some basic features are inexplicably lacking--for example, while I could program it to dial a series of digits after calling a phone number (useful for automatically logging in to a corporate voicemail box), I could not get it to wait for user input before dialing the extra digits. Others have commented on the missing copy/paste functionality, and the fact that you can't do instant messaging. The lack of IM is possibly AT&T trying to protect its lucrative text messaging business, but I suspect the clamor will quickly grow loud enough to force Apple to add IM in fairly short order.

(I also suspect it won't be long before people get IM to work on the iPhone through a browser-based client.)

There's no doubt in my mind that the iPhone is a game-changer. Despite its flaws, all other smartphones are awkward and clunky in comparison, bloated with too many tiny buttons and features made useless through interface obscurity. And just as the original iPod eventually yielded the Nano and Shuffle, it's exciting to contemplate where this product will be in five years.

Posted at 01:22 PM | Permalink |

Ruby on Rails Impressions

Tuesday - June 12, 2007 09:47 AM

I've had some time to dig a little more deeply into Ruby on Rails, and I'm still impressed.
I've been building a pilot logbook application, and in just a few hours of work (maybe 7-8 hours so far, more than half of which was just getting back up to speed on programming and object-oriented techniques) I've built a nearly feature-complete, if basic, application. I've even had time to add some HTML decoration and formatting, and do a fair bit of user interface refinement.

The coolest thing about Ruby on Rails, though, has been the support for generating "scaffolding," basic interfaces for creating, updating, and deleting database entries. The scaffolding itself consists of only a few lines of auto-generated code (with lots of stuff running invisibly in the framework), but it radically changes the development process.

The magic is that all you need to do is define the stuff you want to store in the database, and from that point forward you have a functional program. It may be crude, it may not provide the features or user interface you want, but it works. That means that as you develop, rather than having to write 80% of the program before you can start actually using (and testing) it, you can start using and testing it almost immediately. Almost the entire focus shifts from working to get basic functionality, to adding features and refining the interface.

That's not just more efficient, it's also a lot more rewarding. There's less of an initial hump to get over before you feel like you've gotten something done.

As an example, take a basic general ledger program. Traditionally, you might start by defining a general ledger database, then write the business logic and create a user interface for entering transactions. There's nothing functional, though, until both the business logic and user interface are significantly complete.

Under Ruby on Rails, you start by defining the fields you want to store in your general ledger (most basically, a general ledger entry will have an account to credit, an account to debit, and a dollar amount). As soon as that's done, you have (for free!) a crudely functional general ledger package. The user interface will suck, and it won't run reports or validate entries, but it will exist and be functional. So then you scratch your chin and say, "I don't like this interface for adding a new ledger entry, I'll replace it with a new one." When that's done, you scratch your chin again and say, "It needs to make sure the credit and debit amounts match," or "I need to be able to generate the balance for any account on any date," and you write that. Eventually you work your way down to details like tracking receivables, calculating depreciation, and so forth, but there's never a point in the process where you don't have working code.
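The ledger example above can be sketched in plain Ruby. To be clear, this is not what Rails generates--the class and method names are mine, and all the ActiveRecord machinery is stripped away--but it shows the two behaviors you'd add after scaffolding: the sanity checks on a new entry, and a per-account balance on any date.

```ruby
# A plain-Ruby sketch of the general ledger described above -- no Rails,
# just the data and the behaviors added after scaffolding. Names are
# illustrative, not what Rails would generate.
require 'date'

class LedgerEntry
  attr_reader :debit_account, :credit_account, :amount, :date

  def initialize(debit_account:, credit_account:, amount:, date:)
    raise ArgumentError, "amount must be positive" unless amount > 0
    raise ArgumentError, "accounts must differ" if debit_account == credit_account
    @debit_account  = debit_account
    @credit_account = credit_account
    @amount         = amount
    @date           = date
  end
end

class Ledger
  def initialize
    @entries = []
  end

  def add(entry)
    @entries << entry
  end

  # Balance of an account on a given date: debits increase it,
  # credits decrease it (the usual sign convention for asset accounts).
  def balance(account, as_of)
    @entries.select { |e| e.date <= as_of }.sum do |e|
      if    e.debit_account  == account then  e.amount
      elsif e.credit_account == account then -e.amount
      else 0
      end
    end
  end
end
```

In the Rails version, LedgerEntry would be an ActiveRecord model, the checks would be validations, and the balance would be backed by a database query--but the shape of the code you write is about this small.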

This iterative development cycle is nothing new for those wise in the ways of Agile Development, though I think Ruby on Rails dramatically cuts the time required to get to first base.

Posted at 09:47 AM | Permalink |

E-mail is broken

Tuesday - May 29, 2007 04:10 PM

Fred Wilson touched on a topic I've been thinking about for a couple years now: e-mail as we currently know it is broken, but how can it be fixed?
There's no lack of e-mail alternatives: IM, SMS, web-forums, and blogs all replace at least some of the functionality of e-mail. But nothing seems well positioned to step up as the next generation of ubiquitous messaging service.

A big part of the problem is that the very feature which makes e-mail so useful--the ability to send and receive messages to and from anyone--is the same feature which makes it so vulnerable to spam.

It wouldn't be hard, for example, to set up a closed e-mail network where you will only get messages from your pre-approved "friends." As Fred correctly points out, many social networks already include this feature, and spam is generally not an issue on those networks.

Simply requiring a sender to ask to be a "friend" really just pushes the problem to a different level, though. A spammer could just as easily send millions of "be my friend" requests and be as much of a headache as they are with spam e-mail. Worse, it wouldn't do much about phishing schemes, since those could be crafted to look like legitimate requests from real companies.

The fundamental dilemma is as follows:

A) No replacement e-mail system will take off unless it's ubiquitous. The fact that anyone can communicate with anyone is the main reason the current e-mail system still dominates, rather than being replaced by some other (presumably closed) messaging system.

B) Companies and ISPs won't adopt a new messaging system unless they have the ability to administer their own accounts--a central Ministry of E-mail Management won't fly.

C) But as long as companies can create their own accounts, spammers can do the same to generate tons of spam (or possibly "be my friend" messages) from millions of unique addresses.

D) The only way to prevent the spammers is thus to break the anyone-to-anyone communications in e-mail.

So to fix e-mail, you either need centralized account management or you need to limit the ability for anyone to send a message to anyone else. The former is unacceptable to the businesses which actually run e-mail servers, and the latter limits the network effect which makes e-mail useful.

There are a couple of technical solutions which I could see as breaking the logjam:

1) A reliable, unobtrusive way to distinguish human- from machine-generated messages. The current "challenge-response" systems are very effective, but too obnoxious for widespread adoption. If you could figure this out, though, it would make it possible to apply different rules to messages depending on the sentience of the sender.

2) It may be possible to modestly limit the "anyone to anyone" messaging feature of e-mail in a way which blocks most unwanted messages without getting in the way of human senders. Whitelisting and Blacklisting are forms of this approach, but not powerful enough by themselves to solve the problem.
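As a toy illustration of approach (2), here is a minimal Ruby sketch--the names and structure are mine, not any real system's--of a filter that whitelists known senders, blacklists known spammers, and holds everything else for a challenge rather than delivering or rejecting outright:

```ruby
# Toy sketch of whitelist/blacklist filtering with a "hold for
# challenge" bucket for unknown senders. Purely illustrative.
class MailFilter
  def initialize(whitelist: [], blacklist: [])
    @whitelist = whitelist.map(&:downcase)
    @blacklist = blacklist.map(&:downcase)
  end

  # Returns :deliver, :reject, or :challenge for a sender address.
  def classify(sender)
    addr   = sender.downcase
    domain = addr.split("@").last
    return :deliver if @whitelist.include?(addr)
    return :reject  if @blacklist.include?(addr) || @blacklist.include?(domain)
    :challenge  # unknown sender: issue a challenge before delivering
  end

  # A correct challenge response promotes the sender to the whitelist.
  def approve(sender)
    @whitelist << sender.downcase
  end
end
```

The hard part, of course, is the challenge step itself: making it reliable enough to stop machines and unobtrusive enough that human senders don't give up.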

Posted at 04:10 PM | Permalink |

Security through Segmentation

Wednesday - February 14, 2007 02:05 PM

Here's an interesting article about the security model for the $100 laptops. Basically, they're relying on extreme segmentation--each application plays in its own sandbox, so malicious software can't do anything to the system or applications even if it gets installed.
It also looks like they're using secure application signing as a way to give applications access to resources outside their sandbox--possibly a weakness (if the key is ever compromised), but better than anything else does today.

I've thought about this idea in the past, and it's very sensible. Very few applications actually need to access data and resources from other applications, and allowing open communications creates a lot of opportunities for mischief.

Posted at 02:05 PM | Permalink |

Prototypical Conversation

Wednesday - January 24, 2007 04:25 PM

She Who Puts Up With Me works in Quality Assurance at a software company. She used to run technical support, and still holds on to a few tech support duties.
My company, on the other hand, does a fair amount of work in usability testing--particularly in automated customer service systems.

About every six months or so, we have a conversation exactly like this:

SHE: You wouldn't believe this crazy customer we've got. He's upset because our software does [some reasonable but mildly counterintuitive behavior]!

ME: Maybe he's got a point. Wouldn't it make more sense for the software to do [a more intuitive but harder to implement behavior].

SHE: But would it really kill this guy to do [two or three step process to accomplish something which could be a single step]?

ME: Probably not, but you can't expect everybody to be a power user.

SHE: They should at least be smarter than a brick. How come you never give me any support on these things?

ME: Just put "User must be smarter than a brick" on the System Requirements. Then you can tell the customer that he doesn't meet the minimum system requirements.

Posted at 04:25 PM | Permalink |


Wednesday - January 10, 2007 08:22 PM

So, let me get this straight.
For probably three years running, rumors about Apple making a mobile phone have run wild. This rumored gadget is universally dubbed the iPhone, if for no other reason than it's the obvious name.

Three weeks ago, Cisco announces a VoIP handset they call the iPhone. Rumors of Apple's iPhone persist, though most people think they'll rename it.

Yesterday, Apple really does announce a mobile phone, and they call it the iPhone. Cisco announces that they're negotiating with Apple so that Apple can license the iPhone name.

Today, Cisco sues Apple to stop Apple from using iPhone.

This is one of those stories where there has to be more to it than meets the eye.

Here's what I think is going on....

Cisco acquired the "iPhone" trademark in 2000 when it acquired Infogear, which had been selling a line of iPhone-branded products. Apparently, though, Cisco had not been selling anything called iPhone for several years.

Apple almost certainly knew this, since their lawyers certainly found the original trademark issuance, and also researched the subject enough to know that the trademark had fallen into disuse. So Apple went ahead with the iPhone name--at least partly because it was the obvious choice, and partly because everyone was already calling it iPhone--figuring that they could pay Cisco a few million dollars and get the trademark transferred.

Meanwhile, Cisco saw that the iPhone name was also both the obvious choice for Apple, and that everyone was already using the name. So they figured that the iPhone name should be worth rather a lot to Apple, and added a couple zeroes to the price Apple offered.

Apple, of course, balked. Especially since (as they no doubt pointed out to Cisco), Cisco hadn't actually been selling anything called iPhone for quite some time. So if it came to a lawsuit, Cisco's trademark could be nullified for disuse. Under American trademark law, it isn't enough to be issued a trademark: you have to actually use the trademark and defend it from infringement if you want to keep other companies from using it.

So Cisco quickly rebranded some other new product as iPhone and rushed it to market in order to maintain the trademark. It was important that Cisco's iPhone be introduced before Apple's iPhone, otherwise Apple would have a much stronger claim that Cisco wasn't actually using the name.

That's why we saw Cisco announce an iPhone just weeks before Apple. They needed to shore up their bargaining position before Jobs' big speech. It doesn't matter that practically everyone with a live Internet connection thinks of Apple--not Cisco--as the iPhone company. All that matters is that Cisco can claim that Apple stole its product name.

Now it's all out in the open: Cisco is irritated that Apple won't pay up for what is (to Apple) a very valuable trademark. Apple is annoyed that Cisco is playing games to extract extra money from what had been a moribund name. And now they're taking it to court.

The conventional wisdom is that Cisco's going to win in court. But I think Apple has a better chance than most give it credit for. It is possible to invalidate a trademark by showing that the name has become generic--which is only a small step from showing that the name is not associated in most people's minds with the product claiming the trademark. Apple will have an easy time proving that (a) basically nobody knows about the Cisco iPhone, while the whole planet knows about Apple's version; (b) Cisco had not used the name for a long time until suspiciously introducing an iPhone immediately before Apple did; and therefore (c) Cisco was attempting in bad faith to reclaim a trademark it had stopped using, solely to extract more money from Apple, the company universally associated with the iPhone name.

Of course, I have no idea if any of this is true. So my opinion is worth what you paid for it.

Posted at 08:22 PM | Permalink |

Anticipating Apple's iPhone

Tuesday - January 09, 2007 06:36 AM

Everyone is anticipating that Apple will announce a mobile phone today, probably in partnership with Cingular (as reported in the Wall Street Journal).
I'm having a hard time getting excited. The mobile phone business just doesn't seem conducive to Apple-style "it just works." The customer service stinks, coverage is often spotty, carriers have a vested interest in selling phones with too many features so they can sell add-on services, and the result is a complicated customer-hostile mess.

I'd be happy to be surprised. But I've come to the conclusion that the problems with the mobile phone business are structural, and not even Apple can fix them. So this could wind up being just a highly-anticipated disappointment.

Posted at 06:36 AM | Permalink |

Zune to be gone?

Sunday - November 26, 2006 07:53 AM

Can it really be almost three years since I first wrote about Microsoft's lame attempt to develop a device which would dethrone the iPod as king of music players?
Back in January of 2004 I was writing about Microsoft's Portable Media Center, which tried to out-iPod the iPod by doing video. Six months before the device actually hit the shelves and proved me right, I concluded that Microsoft was on completely the wrong track because (a) music, not video, is the killer app for music players, (b) consumers really, truly don't want a Windows interface on a piece of consumer electronics, and (c) the hardware and software need to be tightly integrated, not made by two different companies.

That was then, this is now, and now we have the Zune. Microsoft seems to have learned a lot this time around, but now they're making an entirely new set of mistakes, as one reviewer emphatically points out.

Microsoft's fundamental problem is that the market has evolved over the past several years. Initially, people bought MP3 players mostly to play their existing music collections, and there was no way to buy music online.

At the time, though, the music companies were desperate to find an online model which worked (shutting down Napster having done little to actually slow down the online sharing of copyrighted music), and they were willing to give Apple fairly reasonable terms for its iTunes Music Store: a fair, fixed price per track, and use restrictions which most consumers can live with.

Now that the iPod and iTunes Music Store have become such overwhelming successes, though, the music companies wish they had been a little more demanding. Since Apple has the vast majority of the market, no record label can dictate terms to Apple: Apple can cut them off from the increasingly lucrative online market, and hardly suffer at all.

Microsoft, on the other hand, desperately wants to break Apple's lock, and needs to have a well-stocked music store in order to gain a foothold for the Zune. The record companies desperately want to force Apple to accept different terms for the iTunes Music Store, which requires having a credible competitor.

So the record companies forced Microsoft to accept the kind of deal they wish they could get from Apple: absurdly onerous DRM (the Zune locks up public domain music), variable pricing (more popular music costs more), and even a cut of the revenue from Zune sales. They even forced Microsoft to cripple the Zune's one and only innovative feature, sharing music via WiFi.

Microsoft has made its own mistakes, too: the Zune utterly lacks the "cool factor" of the iPod, being too big and clunky-looking; and not providing Vista support on Day 1 is a gaffe of inexplicable magnitude.

Still, there's no small amount of irony that the music companies' desperation to force Microsoft to accept less-favorable terms for online music for the Zune will play a large part in dooming the effort to create a credible competitor for the iTunes Music Store.

It must be fun to be a record company executive and live in a world where you think consumers are so desperate for your product that they will buy it no matter the price and no matter how absurd the terms.

Posted at 07:53 AM | Permalink |

Traffic Stats: Harder than they Look

Thursday - October 26, 2006 09:37 PM

Shorter zeFrank: "Hey, why is it that Rocketboom claims 10x the viewership that I have, but Alexa says I get more traffic?"

Shorter Andrew: "Reading an Apache log isn't rocket science, and my numbers are accurate."
As it happens, I know a thing or two about traffic measurement (though I'd hardly call myself an expert), and with all respect to Andrew, he's answering the wrong question.

Yes, it is true that analyzing an Apache server log isn't that difficult--with some pitfalls--but that's not the issue. The issue isn't figuring out how much traffic you get; the issue is proving to a third party (i.e., advertisers) how much traffic you get when you have a financial incentive to inflate the numbers.
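To make "not that difficult--with some pitfalls" concrete, here's a toy Ruby counter for Apache common-log lines. The interesting part is the definitions: every matched line is a "hit," but which requests count as "page views" is a judgment call (here: successful responses that aren't images or stylesheets), and that judgment is exactly where two sites' numbers start to diverge.

```ruby
# Toy Apache common-log counter. Every parseable request line is a
# "hit"; only successful non-asset requests count as "page views."
LOG_LINE = /^(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+) [^"]*" (\d{3}) \S+/

def count_traffic(lines)
  stats = { hits: 0, page_views: 0 }
  hosts = []
  lines.each do |line|
    m = LOG_LINE.match(line) or next   # pitfall #1: malformed lines
    host, _method, path, status = m[1], m[2], m[3], m[4].to_i
    stats[:hits] += 1
    hosts << host
    # pitfall #2: what counts as a "page"? Here: status 200, not an asset.
    stats[:page_views] += 1 if status == 200 && path !~ /\.(gif|jpg|png|css|js)$/
  end
  stats[:unique_hosts] = hosts.uniq.size
  stats
end
```

Even this trivial version has to make choices (what's an asset? do redirects count? are unique hosts "visitors"?) that can easily swing the headline number by a factor of two or more between sites.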

I don't want to suggest that Andrew is inflating his numbers--I have no way of knowing one way or the other--but this is a very real problem. It was a problem back in Web 1.0, when advertiser-supported websites were known to inflate their hits by various methods, including forging the log files (which isn't very hard, and is almost impossible to detect).

So a lot of advertisers are rightly skeptical of self-reported traffic statistics from websites.

And I think zeFrank raises a legitimate question when he asks why RB claims ten times the viewers yet at least one independent traffic monitoring site suggests that zeFrank gets more. You can't just casually dismiss a 10x discrepancy like that by saying that Alexa's numbers are crap.

There are lots of reasons why this could happen:

1) Apples and Oranges

There are a gazillion different ways to track web traffic: "hits," "visits," "bytes served," "downloads," etc. None of these is perfect, and their definitions are not always consistent.

It appears that Rocketboom's traffic claim is, essentially, the number of times video files are served. Alexa's ranking is something else--I'm not sure exactly what.

Someone pointed out that on the RB website, every time someone loads a page it loads a video, whereas on zeFrank the videos only load when a visitor actually clicks on the video. zeFrank's behavior is more user-friendly, while RB's will substantially increase the number of video files served (I would guess by a factor of at least two, and likely more). This could account for a significant part of the discrepancy.

2) Sampling Bias (aka Alexa Sucks)

The group of people Alexa samples to gather its statistics is not representative of Internet users as a whole (but then, what is?). This could make a difference if RB and zeFrank attracted substantially different demographics. I don't know if they do, but I seriously doubt that Alexa's non-representative sample could account for an order-of-magnitude error. 25% might be believable, but I'd be more inclined to think that the two shows actually have very similar demographics.

More significant, though, is the fact that Alexa only counts people who visit the site in their web browser. Alexa completely misses anyone who downloads through iTunes or an aggregator like Democracy, or watches through something completely different like RB on TiVo. I don't know what fraction of the total this might be, but I suspect it is large.

What's more, since zeFrank has more interactive stuff on his website, zeFrank viewers are more likely to fire up their browser and go to the comments, etc. Rocketboom is more of an old-media (oops, sorry for the naughty word) model, where you view the video and that's about it. I would guess that people who subscribe to RB in iTunes (or other aggregator) are less likely to visit the website than zeFrank subscribers.

3) Network Caching/Web Proxies

It is not safe to assume that every person who views an online video got it from the host website.

Some ISPs (most notoriously AOL) will cache web pages on their internal network as a way to save bandwidth. When an AOL subscriber requests a page, the AOL proxy server will first check to see if it already has a copy--and if so, the subscriber gets it from the AOL proxy, and the request never hits the host site.

This is a very effective way to save bandwidth, but it can wreak havoc on server statistics. On my company's website, as much as 25% of our traffic comes from AOL, and on the static pages we never see it in our logs (but we know it's there because we still get requests for the dynamic web pages).

However, it is possible to configure your web server to instruct proxy servers (like AOL's) to not cache your page. If Andrew set up his server this way but zeFrank didn't, this could account for a large fraction of the discrepancy.

This works in zeFrank's favor: his viewership may actually be quite a bit larger than he thinks, because a lot of people might be viewing his videos through ISPs which do web caching.
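The "don't cache me" configuration mentioned above boils down to a few standard HTTP/1.1 response headers; here's an illustrative sketch (real servers would set these through their own configuration, not Python):

```python
# Sketch of the standard HTTP/1.1 response headers that tell shared
# proxy caches (like AOL's) not to serve stored copies of a page.
def no_proxy_cache_headers():
    return {
        "Cache-Control": "private, no-cache, must-revalidate",
        "Pragma": "no-cache",  # honored by older HTTP/1.0 proxies
        "Expires": "0",        # treated by old caches as already-expired
    }

headers = no_proxy_cache_headers()
for name, value in headers.items():
    print(f"{name}: {value}")
```

A site sending these headers sees every request in its own logs; a site without them may be invisibly served by proxies for a big slice of its audience.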

4) Someone's Lying

I started this message by pointing out that the problem isn't figuring out how much traffic you've got. The real problem is being able to prove it to a third party.

Both Andrew and zeFrank are selling advertising on their videos, and so both have a financial incentive to make their traffic numbers look as big as possible. I'm not saying that anyone's being dishonest--I have no way to judge. But it has happened in the past, and there's no doubt it will happen again in the future. If all you're releasing is your own web log analysis, those numbers are automatically suspect if there's money involved. You want to have a trusted third party gathering the data.

It is also possible to be misleading without being out-and-out dishonest: for example, citing your peak daily traffic in a way which implies that it's your average traffic.

In summary: this is a much more complicated question than just whether everyone's being honest. There are many different ways to measure traffic, and some of the specific design and configuration decisions made by both zeFrank and RB can lead to wildly different numbers for their respective web traffic even if viewership is similar.

So zeFrank's challenge ("Why does RB claim so many more viewers than I get but Alexa says I'm ahead?") is reasonable. On the other hand, there are many likely reasons which don't imply any dishonesty on anyone's part.

And this is an issue which will continue to come up as long as there's money in this business.

Posted at 09:37 PM | Permalink |

Wifi Vulnerability?

Friday - August 04, 2006 10:15 AM

I've been thinking about the security vulnerability in WiFi network cards announced yesterday, and I'm starting to wonder whether there's really all that much to it. The security hole was demonstrated on a MacBook with a third-party WiFi card installed, and was shown on videotape rather than in a live demonstration. Very little additional information was given, other than that the bug was in the WiFi driver.
This strikes me as odder and odder the more I think about it.

To begin with, the hardware used is really unusual--probably less than one in ten thousand MacBooks will have a third party WiFi card installed, for the simple reason that all MacBooks come with built-in WiFi. The only reasons to have a third party card would be because the built-in WiFi broke (and the MacBook is a new machine, so they're all still under warranty), or because the user wants to connect to two networks simultaneously.

So it is reasonable to ask why they would choose to use a hardware combination so unusual as to be almost nonexistent in the real world. The given answer was that they chose Mac to tweak the image of Apple as a secure platform (the actual quote is somewhat more colorful), but used a third-party WiFi card because they didn't want to leave the impression that it was just an Apple problem. Huh? A finer example of pretzel-logic I have rarely seen.

In the absence of more details, it is entirely possible that this bizarre hardware was chosen precisely because it has a unique vulnerability which does not exist on more common platforms. This undermines the implication in the demonstration video that every WiFi equipped computer is vulnerable.

To put a finer point on it: if the flaw is in the WiFi device driver (as claimed), then every combination of WiFi hardware and OS will have, potentially, an entirely different set of vulnerabilities. Some drivers might be very bad, and others very good. It is possible that this hardware was chosen for the demo because it has a particularly bad driver, but that doesn't translate into a real-world problem since almost no real users would be using the buggy driver. (What's more, we shouldn't fault the manufacturer if they didn't rigorously test their software with hardware that nobody uses. As with anything else, there is a law of diminishing returns in testing.)

My next problem has to do with the way the demo was carried out: on video. The stated reason is that they didn't want anyone sniffing the WiFi connection and discovering the attack before it could be patched. That's an entirely reasonable concern. But it also conveniently avoids almost all independent scrutiny of the attack, and even the most basic questions about the level of the problem.

And that gets me to the third issue I have with this claim. The people responsible for the demo are spinning it as a major flaw affecting every WiFi equipped computer out there (and getting a lot of publicity as a result), but have given almost no information about what hardware and software might actually be vulnerable, and under what conditions. Some very basic questions have yet to be answered about the scope of the problem, such as:

* Is this a universal flaw, something unique to this oddball hardware combination, or a class of problems the severity of which can vary widely between configurations? For example, is the MacBook's built-in WiFi also vulnerable? What about PowerPC-based Macs? Or Windows laptops?

* This attack used (effectively) a malicious base station. What network states make a computer vulnerable? Is the computer only safe if the WiFi is actually turned off, or is it safe when connected to a trusted base station? What about when it is attached to an encrypted network? Or does it depend on the particular hardware? What about configuring the computer to only attach to trusted networks?

* Is firewall software an effective defense against this class of attacks, or not?

In sum, is this really the huge problem that the headlines imply, or just a case of the hardware company (Apple) failing to test its device drivers with a particular third-party add-on that nobody is likely to install? Inquiring minds want to know.

(And in the meanwhile, the practical advice--turn off the WiFi when you're not using it--still stands. As a bonus, turning off WiFi extends the battery life of the laptop.)

Posted at 10:15 AM | Permalink |

Wi-Fi Vulnerability

Thursday - August 03, 2006 10:03 AM

Apparently a lot of wi-fi drivers on laptops and personal computers have some serious security vulnerabilities which could allow anyone within wireless range to break into the machine.
Fortunately, there's an easy cure: turn off the wi-fi.

Even though there's no attack "in the wild" as yet, consider this a warning. Until the computer manufacturers issue updated drivers to fix the problems (which I don't expect will take long), be sure to turn off your wireless network unless you're actually using it.

Posted at 10:03 AM | Permalink |


Monday - April 17, 2006 05:26 PM

Om Malik wrote today on the virtues of simplicity in technology. Do one thing and do it well, rather than trying to cram every possible feature into the gizmo. Too many features ultimately confuse consumers and lead to long-term dissatisfaction. Think iPod, the Palm V, the original Blackberry, and Google before they started having Google-this and Google-that.
In technology, two powerful forces conspire to make products more complex than they need to be. One is marketing, since feature-checklist marketing demands that as soon as any competitor adds a new feature (no matter how absurd), then you need to add it too.

The other force is Wall Street, which demands that technology companies post exponentially increasing revenue growth even when a company's core market nears saturation. The only way out is to expand into other markets by creating goofy hybrid products like camera-phones, MP3-player-phones, smart-phones, game-player-phones, and other hyphenated-phones (none of which are very good at being phones).

All of which makes me harken back to the last mobile phone I truly loved. It was a Nokia 8860, about the size of a matchbox with a slide-out keypad cover and a chrome finish. It was an excellent phone, even though it lacked a color screen, camera, downloadable ringtones, or Palm OS (but it did have some lo-res black-and-white games). I dialed a number and talked to people; alternately, people would dial my number and I could answer or not as I chose.

Today I carry a Treo 650, considered by many the best smartphone on the market. I do appreciate being able to sync my address book to my computer (a useful capability the Nokia lacked). In addition, the Treo will take really crappy pictures, play MP3 files I load onto it with considerable difficulty, play lo-res games (but in color!), and sometimes place or receive phone calls. The latter capability only works intermittently, since the phone has a habit of turning itself off, randomly rebooting, or sometimes just not working for unknown reasons.

Ah, but now I only need to carry one gizmo around instead of two! It is true that I'm more conscientious about using my Palm organizer now that it's also my phone. But a Treo is as big as or bigger than a Nokia 8860 glued to a Palm V, and I still own both an iPod and a digital camera (the music player and camera features of the Treo being so indescribably awful as to be useless).

The difference is that instead of owning a highly functional and stylish phone, a highly functional and stylish organizer, and a highly stylish and functional music player, I merely own a highly functional and stylish music player and a clunky and marginally functional organizer/phone. But both the Nokia 8860 and the Palm V have long since been discontinued and are no longer supported in the U.S.

Now that's progress.

Posted at 05:26 PM | Permalink |

"Social Software" Officially a Meaningless Term

Friday - March 10, 2006 10:36 AM

Technology buzzwords follow a well-travelled trajectory from Creation to Buzz to Hyped to Overused to Meaningless to Ignominious Death.
The term "Social Software" has now officially made it to the Meaningless stage. "Social Software" is a derivative of the already-dead buzzphrase "Social Networking Software" which went from Buzz to Ignominious Death almost before the ink on the VC term sheets was dry.

As evidence of the meaninglessness of "Social Software," I present the "Social Software Top 10" list. The common thread appears to be web sites built around user-generated content; no post-IPO companies need apply.

But if all you need to be "Social Software" is user-generated content, why isn't eBay on the list? Surely that's a company built entirely around the collective content (in the form of auctions) of its users. eBay's feedback system is arguably more sophisticated a social mechanism than anything on MySpace, if more formalized and limited to the context of online auctions.

And why not Amazon.com, which has long included user-submitted reviews and has a feedback and scoring mechanism?

For that matter, the World Wide Web itself is the ultimate Social Software, since the entire concept was built around web users creating web pages. And how about Google, which generates no content itself and relies on the links inside web pages to determine what pages are relevant to any given topic?

I look forward to the ignominious death of the Social Software buzzphrase.

Posted at 10:36 AM | Permalink |

HowTo: Set up a RAID mirror under OS X for the ultimate hard drive backup

Tuesday - February 21, 2006 09:20 PM

Introduction: Back in November I wrote about setting up a RAID array on my new Mac using a bunch of external hard drives I had sitting around. RAID stands for Redundant Array of Inexpensive Disks, and is a way to bind together multiple disks into one larger and/or more reliable virtual disk. RAID sounds complicated, but it is built in to Mac OS X, and isn't even that hard to set up.

One application in particular is extremely handy: setting up two (or more) hard drives to mirror each other to provide a constant backup. Known as a mirrored RAID (or RAID 1 in geek-speak), this configuration keeps all the drives in the RAID array constantly backing each other up. If any drive fails, the other(s) take over immediately and seamlessly. In fact, you might not even know that your hard drive has failed, unless you happen to see the smoke.

Setting up a RAID array as your boot drive in Mac OS X isn't hard, but it does take some time. The payoff is worth it: you will be secure in the knowledge that a single hard drive failure will never again cost you any data.
Materials: You will need:

1) A Mac

2) An external (preferably FireWire) hard drive with at least as much capacity as your Mac's internal hard drive. This is the "mirror drive" (and you can have more than one for extra redundancy).

3) An external (FW or USB) hard drive with at least as much space available as you are currently using on your Mac. This will be a "scratch drive" you use for setting up the RAID, so you can borrow a drive from a friend and return it.

4) Your Mac OS X Installation Disk, either one that came with the computer or one you used to upgrade it.

To find out the capacity of your hard drive, click on the drive in the Finder and select "Get Info" (apple-I). If you're buying a new hard drive for your backup, get a drive with at least 8% more capacity than Finder reports in the Get Info window--that allows for different ways of measuring hard drive capacity, plus formatting overhead. So if Finder says your hard drive has a capacity of 232 GB, get at least a 250 GB external drive. It's OK to get an external drive bigger than your existing drive--we can recover the extra space.
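The sizing rule of thumb above is just arithmetic; here it is spelled out (the 8% headroom figure is the rule of thumb from the text, not a precise formula):

```python
# Quick sizing check for the mirror drive: take what Finder reports
# and add ~8% headroom for decimal-vs-binary gigabytes plus
# formatting overhead.
def min_mirror_size_gb(finder_gb, headroom=0.08):
    return finder_gb * (1 + headroom)

needed = min_mirror_size_gb(232)
print(f"buy at least {needed:.0f} GB")  # ~251; the text rounds this to "at least a 250 GB drive"
```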

Overview: There are essentially three steps:

1) Copy the entire contents of your hard drive onto the scratch disk.

2) Set up the RAID array.

3) Copy the contents of the hard drive from the scratch disk onto the RAID array.

Detailed Set-Up: Boot your Mac, insert the OS X Installation Disk, and double-click on "Install Mac OS X." You'll be asked to reboot your computer and probably enter your admin password.

Note: These instructions are for 10.4; earlier versions may have some slight differences.

The computer will boot off the OS X Installation Disk. This is necessary because you're going to completely rebuild the hard drive, which means that the hard drive can't be the boot disk.

You will be asked to select your language, then you will be at the screen to begin installing OS X. That's not what you want, though: you want to launch Disk Utility.

In the 10.4 installer, the Disk Utility is found under the "Utilities" menu. Choose it, and Disk Utility should launch.

Plug in your scratch disk (if you haven't already). It should appear in the bar on the left-hand side of the window, along with the Macintosh HD (your internal hard drive) and the OS X Installation Disk.

Select your Macintosh HD in the bar on the left-hand side of the window.

From the File menu, choose "New" > "Disk Image from Macintosh HD". Save the new disk image somewhere on the scratch disk.

Your computer will now begin copying the entire contents of your internal hard drive into a file on the scratch disk. Depending on how much stuff you have saved, this could take a while; perhaps a few hours.

Once the new disk image is complete, plug in the mirror drive (if you haven't already). It should appear in the bar on the left-hand side of the window.

If your mirror drive is bigger than your internal hard drive: We want to make this extra space usable, so select the mirror drive, then choose "Partition" on the right-hand side of the window. Set up two partitions, and move the bar between them until the first partition is the same size as (or slightly larger than) your internal hard drive. Name the first partition "Mirror" and the second partition "Extra Space."

Now we're ready to configure the RAID array. Select your internal hard drive, and choose "RAID" on the right-hand side of the window. Create a new RAID Set named Macintosh HD, with volume format "Mac OS Extended (Journaled)" and RAID Type "Mirrored RAID set". Drag the mirror drive into the raid set (or the Mirror partition of the mirror drive, if you set up two partitions).

Click "Options" and make sure "RAID Mirror AutoRebuild" is checked.

Finally, click "Create" to create the RAID set.

Congratulations! You've set up the RAID! Now all you need to do is copy your hard drive back onto the new RAID set and you're done.

Click "Restore" in the right-hand side of the window. For "Source" drag the image of the internal hard drive you created earlier. For Destination, drag your newly created RAID set. Select "Erase Destination" and "Restore."

Your computer will now copy your hard drive back onto the RAID set. This may take a few hours.

When it's done, you're done. Quit Disk Utility, then go to Startup Disk (in 10.4, this is in the Utilities menu) and choose Macintosh HD (your new RAID set) and click Restart...

Done: You now have a RAID mirror of your internal hard drive. Any time both drives are connected and functional, the computer will keep them both synchronized. If one fails (or is unplugged), the computer will keep working on the remaining drive(s). If a drive is reconnected to the array, the computer will update it to the right state.

To verify this, once you've rebooted your computer you can open Disk Utility and choose your Macintosh HD, then select "RAID" from the panel on the right hand side. It should show all the disks in the array, along with the status of each. If you unplug the external drive, it will change the array status to "Degraded" and show the external drive as "offline." Plug it back in, and it should show the external drive as "rebuilding."

Posted at 09:20 PM | Permalink |

NSA Spying

Tuesday - January 17, 2006 04:00 PM

Unless you've been living in an igloo the past several weeks, by now you've heard that the National Security Agency (NSA) has apparently been conducting surveillance of American citizens without a warrant, in apparent violation of FISA, the post-Nixon law which regulates such activities.
Ignoring for the moment the question of whether this activity is legal (I doubt it is) or advisable (they could have asked Congress for permission), I'd like to think about what exactly the NSA might have been doing. I don't have a security clearance and have never (to my knowledge) come into possession of any classified information, so this is purely a thought-experiment.

(And if I happen to be right about a few things, please re-read the part about this being a thought experiment before you come knocking on my door, Mr. FBI Agent.)

Interception Capabilities
It is fair to assume that the NSA has the technical capability to eavesdrop on any international communications anywhere in the world. Any phone call, e-mail, web connection, fax transmission, etc. that crosses an international border can probably be intercepted if there's enough interest (and budget) to do so. All long-distance electronic communication travels in one of three media: copper, fiber optic, and wireless. All three media are interceptable, and the "Echelon" program was probably created for the express purpose of intercepting nearly all international communications.

Old-fashioned copper cables are easy to tap. All you need is an induction coil. Wireless is similarly easy: an antenna in the right place will do the job (and the NSA has been known to rent offices which happen to be in useful locations for intercepting signals from, say, the Russian embassy).

For a long time, many people assumed that fiber optic cables were secure, but of course that's not true. Optical fibers will "leak" light if they are bent at a radius just a little too tight, making it possible to tap a fiber without disrupting the signal or cutting the cable. It has been reported for many years that the U.S. operates one or more submarines built just for the purpose of tapping undersea optical cables.

The biggest challenge is probably free-space optics: using a laser to send signals through the air. But free-space optical systems have a limited range because of atmospheric problems (dust, birds, etc.), and so aren't used very often. But even free-space optics can be intercepted (with difficulty) because the laser will scatter off dust in the atmosphere, and a well-placed receiver may be able to pick out the signal.

The NSA probably also has the capability to intercept some in-country communications in foreign countries of interest. Places like Russia, Iran, and the U.K. It's much harder to be comprehensive with in-country communications, since the fewer the hops, the fewer places there are to tap the signal. International communications also tend to be funneled along a limited number of international trunks, while in-country networks are more meshed. Finally, international transmissions are much more likely to be interesting to spies, so there isn't as much incentive to intercept purely domestic communications. So the NSA's capability for in-country interception is probably focused on useful and/or easy targets like government agencies and mobile phone networks.

Finally, the NSA probably has one big hole in its technical capabilities: there is probably little or no ability to intercept purely domestic communications in the U.S. This is because wiretapping inside the U.S. is outside the NSA's traditional mission, so there would be no need to develop the capability (the FBI, on the other hand, can and does wiretap calls inside the U.S., but the FBI has a very different mission and structure than the NSA).

So, to summarize: the NSA probably intercepts a very high percentage of the communications which cross international borders, and a lesser percentage of in-country communications in foreign countries, likely focused on mobile phone networks, government agencies, and other groups or individuals we have a specific interest in. There's probably very little interception of U.S. domestic communications, except to the extent that they happen to cross international borders (this blog, for example, is actually hosted in Canada; and for a time many domestic long distance calls in the U.S. were being routed through Canada).

What to Do With the Data?
It is absurd to think that the NSA has the ability to actively monitor all the communications it probably intercepts. There's simply too much data, and not enough people to analyze it all.

But you can learn a lot without knowing anything at all about what a message says, just who the sender and recipient are (aka Traffic Analysis). Every time one person calls another person, or sends an e-mail or fax, or visits a web site, you can infer a relationship between the two people or organizations involved. With enough data and time, you can construct the social network of everyone whose communications you're tracking.

When you consider that the NSA is likely doing traffic analysis on a significant fraction of the electronic communications outside the U.S., that means that they have likely built up a database of the social networks of a big chunk of the developed world's population. This is an extremely useful thing to have if you're in the espionage business, since it lets you immediately connect a suspected spy or terrorist to all his or her friends and associates. If you know two or three members of a terrorist cell, you can probably identify the rest of the cell simply by looking for common associates.
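The "common associates" trick is simple set arithmetic over a contact graph. Here's a toy sketch (all names and records invented) of the kind of query traffic analysis enables:

```python
from collections import defaultdict

# Toy traffic analysis (all names invented): build a contact graph
# from who-contacted-whom records, then intersect neighbor sets to
# find the common associates of two known suspects.
records = [
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
    ("carol", "dave"), ("bob", "dave"), ("eve", "frank"),
]

graph = defaultdict(set)
for a, b in records:
    graph[a].add(b)
    graph[b].add(a)

def common_associates(x, y):
    """People in contact with both x and y."""
    return sorted(graph[x] & graph[y])

print(common_associates("alice", "dave"))  # contacts both suspects share
```

Note that none of this requires reading a single message; the sender/recipient metadata alone builds the graph.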

Going beyond simple traffic analysis, there have been persistent rumors in the speech recognition community that the NSA has an unusual interest in the technology. The civilian state of the art (which probably isn't so different from the classified technology) allows you to do fairly powerful word spotting in multiple languages, but accurate machine transcription of a phone call is still difficult. Speech recognition is computationally intensive, so it probably isn't used on all phone calls, but on a subset of calls which excludes obviously uninteresting ones. It's not clear how useful this actually is, since (presumably) most criminals, spies, and terrorists know enough to speak in code over the phone. Simple word substitution like substituting "double cheeseburger" for "bomb" will foil automated speech recognition since computers aren't smart enough to know that plotting to hide a double cheeseburger on an airplane makes no sense.
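A toy word-spotting pass makes the code-word weakness obvious (transcripts and watch list are invented):

```python
# Toy word-spotting over invented transcripts: flag any transcript
# containing a watch-listed word. A simple code phrase sails past it.
watchlist = {"bomb", "detonator"}
transcripts = [
    "we will hide the bomb on the airplane",
    "we will hide the double cheeseburger on the airplane",
]

flagged = [t for t in transcripts if watchlist & set(t.split())]
print(len(flagged))  # only the literal mention gets caught
```

Defeating this would take a system that understands that smuggling a cheeseburger onto an airplane makes no sense--exactly the kind of judgment computers lack.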

It also makes sense for the NSA to keep archives of communications which might be of future interest. The main limitation is storage space, and a few billion dollars a year will buy you one heck of a disk array. The advantage is that if you identify new enemies (for example, when they fly airplanes into office towers) an archive will let you go back and trace how the plot took shape, who else might have been involved, and what other plots had been considered and might still be active. So there is probably a giant disk array somewhere which holds a significant fraction of the world's international communications.

All this technology (amazing as it is) can't really do anything more than identify potentially interesting relationships, where "interesting" is defined by the people running the system. If you're only looking for Islamic bomb plots, you're not likely to catch the Japanese naval attack on Pearl Harbor. And vice-versa.

But the real bane of any large-scale data mining operation is the number of false positives you generate: things that look relevant to the computer but are in fact completely useless. Software designed to search for a needle in a haystack will inevitably supply far more needle-like bits of hay than actual needles. An article in today's New York Times suggests that the value of the data the NSA was able to supply to the FBI was fairly limited, amounting to little more than names, e-mail addresses, or phone numbers of people with connections to suspected terrorists. Since the NSA identified thousands of "leads" and often couldn't provide any context (because such information would be highly classified), there wasn't much the FBI could do.

Posted at 04:00 PM | Permalink |

Laptop Lust

Wednesday - January 11, 2006 08:29 AM

This is terrible. Every time Apple introduces a new laptop, I want to go out and replace mine right away.
So, no surprise, I have a case of laptop lust for the new dual-core "MacBook" laptops announced yesterday. Even though it's a first-generation design and almost guaranteed to be buggy. Even though it only has a 15" screen, not the 17" monster I currently use. Even though the name, well, let's just not discuss the name, OK?

Hi, my name is Shivering Timbers, and I'm a Mac addict.

Right now I'm using a 17" PowerBook G4, and I love it. I won't be replacing it any time soon. But that doesn't mean I can't have a little crush on the sexy new machine.

Posted at 08:29 AM | Permalink |

Why does technology suck?

Tuesday - January 03, 2006 08:30 PM

I've noticed something. Technology sucks.
Let me be a little more specific. Technology, and in particular the giant systems which big companies buy, usually doesn't work properly, if at all. In comparison, home technology (for all its shortcomings) is extremely robust, stable, and usable.

And I'm not just talking about the well-known limitations of certain Microsoft products. I'm talking about everything in IT, from databases to web sites to Three Letter Acronym implementations (CRM, SFA, SCM, etc.). If a company spent more than a million dollars on it in the past twenty years, it probably sucks.

Maybe I'm painting things with too broad a brush. But here are some examples I've observed in my travels:

* A client of mine recently migrated one of its mission-critical systems to a new technology. Lots of new capability, but it sucks. It crashes weekly, and has been down for hours and even once for days at a time.

* A consulting firm I know once privately described its business model to me as "cleaning up after Accenture." This firm employed about 150 programmers, and the majority of its business was essentially picking up the pieces of big failed projects and trying to get something to work.

* Customer Relationship Management software sucks so badly that googling "Why do CRM projects fail" returns over a million hits, and industry publications regularly run articles on the topic. Those are the same publications which are generally in the business of promoting new technologies. Yet companies continue to launch CRM projects. In fact, at my prior job, I lived through two failed CRM projects in the space of less than five years.

Some consultants claim that over half of all big technology projects fail at one level or another, and that nearly all fall short of at least some of their initial objectives. Companies would never accept this dismal level of performance from other major capital expenditures--imagine if there were only a 10% chance that a new office tower would function as designed, and less than a 50% chance that it would even remain standing after completion. But in the world of IT, where technology sucks, this is the norm.

There are lots of reasons why technology sucks, and I don't know which are most important. Chances are that it's a combination of factors:

* Too much focus on short-term gains (both vendor revenue and project ROI) at the expense of building robust and stable systems.

* Overpromising the capabilities of new technology.

* A system of software vendors (Oracle and its ilk) and systems integrators (Accenture and its ilk) which makes it easy to blame someone else for failure, and encourages drawing out failing projects to the bitter end.

* Price-sensitive bidding which discourages extra expenses like design and testing.

* System integration is inherently expensive and time-consuming, and most enterprise projects require more system integration than anything else.

* Customers prefer flashy (but only marginally useful) features over systems which are simple and robust.

* Salespeople have a hard time saying "no" when the customer is dangling a multi-million dollar project, so dumb ideas don't get quashed early on when they should be.

I could write an entire article about each of these. But not tonight.

Posted at 08:30 PM | Permalink |

CO2 Sequestering

Monday - December 12, 2005 02:18 PM

There's an article in Wired today about a technology for sequestering carbon dioxide underground. The idea of pumping CO2 underground as a way to reduce greenhouse gasses is not new; people have been talking about it for years.
The disadvantage of pumping CO2 is that the stuff is still a gas (or liquid) under very high pressure deep underground, and there's no guarantee that it won't escape someday. In fact, pockets of CO2 naturally exist in some rock formations, and it does escape from time to time.

What's new is the discovery that if you pump the CO2 into a basalt formation, then the CO2 chemically reacts with the rock and forms calcium carbonate (that is, chalk) fairly quickly: within 18 months. Once the chemical reaction happens, the CO2 can't escape, since chalk is solid and chemically stable. This is a permanent disposal method, assuming it works as promised.

As it happens, there's lots and lots of basalt in the world. Basalt is the rock which comes out of certain types of volcanoes. It forms much of the ocean floor, all of the Hawaiian Islands, and the bedrock in many parts of the world.

Assuming that this technology works (and it isn't too expensive), there's no reason why power plants in many parts of the world couldn't simply inject their waste gasses right into deep wells located on-site. This could provide the best possible energy solution: a plant which burns coal (the cheapest and most plentiful fuel around), but has zero emissions as everything which would normally go up the smokestack goes into an injection well instead.

Posted at 02:18 PM | Permalink |
