After my previous post on hosting, here are some quick notes on other essential bits of online technology and favored/hated vendors, from the perspective of a small business looking for solid but very cost-effective solutions. I tend to pick vendors that are really good at one thing, rather than those mediocre across a broad range of services.
There’s sound, healthy skepticism about crazy conspiracy theories (“George Bush paid Mossad to blow up the Pentagon with a missile and make believe it was a plane”). Then there’s plain stupidity. People dismissing speculation about what caused the backbone cable cuts leading to massive outages in the Middle East and India as “tinfoil hat thinking” are massively deluded about how nation states behave. (Fun way to know for sure it happened in the first place: a former World of Warcraft guildmate located in Bahrain told us he had huge latency for days). I have no idea who’s behind these cuts, but there’s just very little chance it’s a bunch of unrelated accidents. The most ludicrous theory is that it was just the unfortunate result of trailing anchors.
Take this from someone two degrees removed from the people who sank the Rainbow Warrior: this stuff does happen outside of movies. My father-in-law is an engineer at the DGA defense procurement agency, while my father is a retired Army officer. What we were told growing up was: “I’m not telling you details about what I know and what I’m working on, so that bad people can’t get information out of you.” It shapes and informs your view of the world! Part of it was specific to the Cold War, but terrorist threats have also hung over Western Europe since the ’70s in various shapes and forms.
It’s funny to see people thinking they’re being all smart and educated and rational actually demonstrating one thing: they don’t know jack about what they’re talking about and are very, very naive. Yes, people are spouting all sorts of nonsense on the internet. Yes, you want to ignore most of it as idle speculation or even outright stupidity. No, it doesn’t mean everything is fine and dandy out there.
Bottom line: after the massive virus outbreaks of these past few years, the attacks against Estonia last year, and now this, we think the chances that a massive Internet slowdown lasting weeks might happen in the next five years are not insignificant. I don’t want to pull a Bob Metcalfe on you, but we’re looking at which parts of our business can be made resilient to such an event. It may come from states, terrorists, organized crime, bored teens, or a combination of the above. If you’re managing servers, I’m sure you’re routinely getting pounded by DDoS attacks, scraping, and all sorts of crazy behavior, just like we are. This ain’t fun.
The Internet is designed for resilience, but if you look at backbone maps, there are failure points, and the risk is there that the whole thing could be made barely functional for significant lengths of time. You don’t even need to blow up all the interconnection points. Once you’ve removed some of them, the rest can slow to a crawl through bottleneck effects. In theory you’re still connected. In practice all you get is timeouts; you can forget about using web apps, let alone VoIP or streaming video. Maybe you can load a 30KB web page once in a while. Not the end of the world, but a tough ride if 100% of your income is based on the assumption of smooth, fast, always-on broadband everywhere. The fact that companies such as Google have their eyes on the backbone may in part be a hedge against such doom scenarios.
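The bottleneck effect is easy to see in a toy model. Here is a minimal sketch (the topology, link names like "ix1", and the capacity numbers are all hypothetical, not drawn from real backbone maps): treat interconnection points as capacity-limited links between two regions, compute the max flow across them, and watch what happens when a couple of links are cut.

```python
# Toy model of backbone interconnection (hypothetical topology/numbers):
# total cross-region capacity is the max flow through the interconnects,
# and cutting a few of them collapses it well below typical demand.
from collections import deque

def max_flow(cap, source, sink):
    """Edmonds-Karp max flow on a nested capacity dict cap[u][v]."""
    flow = 0
    residual = {u: dict(vs) for u, vs in cap.items()}
    # make sure every edge has a reverse entry in the residual graph
    for u in list(residual):
        for v in list(residual[u]):
            residual.setdefault(v, {}).setdefault(u, 0)
    while True:
        # BFS for the shortest augmenting path
        parent = {source: None}
        q = deque([source])
        while q and sink not in parent:
            u = q.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:
            return flow
        # push the bottleneck amount along the path
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= push
            residual[v][u] += push
        flow += push

# Two regions joined by three interconnects, 10 units of capacity each.
links = {
    "west": {"ix1": 10, "ix2": 10, "ix3": 10},
    "ix1": {"east": 10}, "ix2": {"east": 10}, "ix3": {"east": 10},
}
print(max_flow(links, "west", "east"))  # 30: all three interconnects up

# Cut two of the three: capacity collapses to 10. Any demand above
# that just queues up, and at the edges it looks like pure timeouts.
links["ix2"]["east"] = 0
links["ix3"]["east"] = 0
print(max_flow(links, "west", "east"))  # 10
```

In theory both regions are still connected after the cuts; in practice two thirds of the traffic has nowhere to go, which is the slow-to-a-crawl scenario described above.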
I had missed the tempest in a teapot between Jot and SocialText, but while catching up on this old news I found this mouthful from Jonathan D. Nolen:
"What it comes down to for me as a customer, really, is this: I don’t just want open source code. I want a partnership with an open company. Open source code and open data are just a minimum bar. You also have to provide channels of communication — and participate in them. And you have to be honest about your product: both its problems and its future directions. In the end, the open relationship with your customers will prove far more valuable (as well as more remunerative) than open source code alone."
Some open source projects are the living dead, while some closed source software vendors have managed to create thriving communities around open access to data. I know which one is more important to me as a customer.
02/27/05 update: The Open Company Test.
As recently as 18 months ago, it seemed there wasn’t much available to handle software product development except bug databases, generic wikis and intranet platforms, or maybe expensive and (for my purposes) cumbersome enterprise software, which is not what I’m interested in.
Now that I’m looking at this again, though (we need something to keep our growing team in sync at Soflow), there are at least three new applications meant to address the need for the right mix of structure and flexibility:
Atlassian provides both issue tracking and collaboration, but I’m looking for a single tool that seamlessly supports the whole product management and development process, from business case to user requirements to functional specs to tracking development progress, testing and debugging. Borland has CaliberRM, but it needs its own server (though the fact that it comes with both a web and a desktop client is attractive) and is not exactly cheap. If I look in that direction, there are plenty of requirements management tools.
Here are my notes so far:
Chad Dickerson and his InfoWorld readers come up with this list, which is a bit uneven but includes common-sense tips:
1. Botching your outsourcing strategy
2. Dismissing open source — or bowing before it
3. Offshoring with blinders on
4. Discounting internal security threats
5. Failing to secure a fluid perimeter
6. Ignoring security for handhelds
7. Promoting the wrong people
8. Mishandling change management
9. Mismanaging software development
10. Letting engineers do their own QA
11. Developing Web apps for IE only
12. Relying on a single network performance indicator
13. Throwing bandwidth at a network problem
14. Permitting weak passwords
15. Never sweating the small stuff
16. Clinging to prior solutions
17. Falling behind on emerging technologies
18. Underestimating PHP
19. Violating the KISS principle
20. Being a slave to vendor marketing strategies
Two reminders that the basics of ecommerce are still quite brittle, especially when payment and identity verification are involved. PayPal still wants to be your pal, so make sure to do transactions today to get the usual fees waived.
This long interview with David Neeleman, JetBlue Airways’ founder and CEO, is a refreshing must-read and leaves me wanting to know more about the guy and his company. Here’s what a couple of queries fetch:
- Neeleman on outsourcing and call centers
- Inc article from March that got lots of linkage
- BusinessWeek article focused on the challenges faced by both the individual and the company
- CIO.com article about JetBlue’s IT
- Baseline and CNet articles about JetBlue’s focus on using Microsoft products
12/15/05 update: The Steady, Strategic Ascent of JetBlue Airways.
"Wall Street firms became motivated buyers of surplus data centers from bankrupt telcos and web hosts. Chapter 11 filings by WorldCom, Exodus, Metromedia Fiber Network and Global Crossing flooded the market with surplus data centers and telecom assets. Financial firms that bought or leased data centers outside New York in the past two years include the Bank of New York, Wachovia, Deutsche Bank, MBNA Corp., New York Life, MasterCard and Goldman Sachs. Dallas, Kansas City and St. Louis became the hottest markets for mission-critical facilities. […]
One challenge was real-time mirroring technology, which historically has limited the distance between primary and secondary data centers to about 60 miles. Software advances have helped overcome these distance limitations, improving the speed of mirroring technology and recovery times."
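The ~60-mile limit quoted above comes straight from physics: synchronous mirroring makes every committed write wait for a round trip to the secondary site, and propagation delay grows linearly with distance. A quick back-of-the-envelope sketch (figures approximate, using the rough speed of light in fiber):

```python
# Back-of-the-envelope check on why synchronous mirroring distance is
# limited: each committed write blocks on a round trip to the secondary.
SPEED_IN_FIBER_KM_S = 200_000  # light in fiber is roughly 2/3 of c
KM_PER_MILE = 1.609

def round_trip_ms(miles):
    """Best-case round-trip propagation delay in milliseconds."""
    km = miles * KM_PER_MILE
    return 2 * km / SPEED_IN_FIBER_KM_S * 1000

print(round(round_trip_ms(60), 2))   # 0.97 -- about 1 ms per write
print(round(round_trip_ms(600), 2))  # 9.65 -- far too slow to block on
```

At 60 miles the added latency per write is around a millisecond, which a busy transactional system can absorb; at 10x the distance every commit stalls for ~10 ms, which is why the software advances mentioned (asynchronous and semi-synchronous replication) were needed to push secondary sites further out.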
eWeek has a long interview with Marty Abbott, SVP of technology at eBay, who walks us through their online operations. Among other things, he explains how the back-end databases have evolved over the last five years:
"We went from one huge back-end system and four or five very large search databases. Search used to update in 6 to 12 hours from the time frame in which someone would place a bid or an item for sale. Today, updates are usually less than 90 seconds. The front end in October ’99 was a two-tiered system with [Microsoft Corp.] IIS [Internet Information Services] and ISAPI [Internet Server API]. The front ends were about 60 [Windows] NT servers. Fast-forward to today. We have 200 back-end databases, all of them in the 6- to 12-processor range, as opposed to having tens of processors before. Not all those are necessary to run the site. We have that many for disaster recovery purposes and for data replication."
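The interview doesn’t spell out how those 200 back-end databases divide the data between them, but the generic technique for going from one huge system to many small ones is horizontal partitioning (sharding). A minimal sketch below, with hypothetical names and a hash-based routing scheme that is my assumption for illustration, not eBay’s documented design:

```python
# Illustrative sketch of hash-based sharding (hypothetical scheme and
# names, not eBay's actual design): route each record to one of N
# independent databases by hashing its key, so data and load spread out.
import hashlib

N_SHARDS = 200  # e.g. 200 back-end databases

def shard_for(item_id: str, n_shards: int = N_SHARDS) -> int:
    """Stable hash of the key -> shard index in [0, n_shards)."""
    digest = hashlib.md5(item_id.encode()).hexdigest()
    return int(digest, 16) % n_shards

# The same key always maps to the same shard, so reads and writes for
# one item never touch the other databases.
assert shard_for("item-12345") == shard_for("item-12345")
shards_hit = {shard_for(f"item-{i}") for i in range(10_000)}
print(len(shards_hit))  # 200: with 10k keys, every shard gets data
```

The point of the design is that each shard stays small enough to run on modest hardware (the 6- to 12-processor range mentioned in the quote), and the spare shards double as replication and disaster-recovery targets.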