My Windows 11 Journey

Barry Briggs

Now, before I begin, let me say first that this is not a rant about Microsoft, Windows, or anything else. I’ve used Microsoft operating systems since DOS 1.0 and worked for the Redmond company for nearly 14 years. I like Windows and love Office and Visual Studio.

I work at home these days. As such I possess a variety of devices including my primary workstation, an Alienware i9 beast. I purchased this “Area51” model, as they call it, a few years back not so much because it targets gamers – I’m not a gamer – but because I’ve learned the hard way that the high-end machine of today is just average tomorrow. (I had an i5-based Mac which became obsolete so fast it made my head spin.)

Anyway.

To Upgrade or Not to Upgrade

In October 2021, Microsoft released Windows 11 to largely tepid reviews. Few people cared, partly because it included very few, if any, compelling new features and – for some, worse – had strict hardware requirements that many machines could not satisfy.

My beast did, in fact, meet the requirements. But it being my production machine, that I paid for, that I support, that I need every single day, well, let’s say even though I’m a pretty high-tech guy I’m pretty conservative about “upgrades.”

Time went on and I was happy with Windows 10.

But over time the Redmond folks began to rumble about dropping support for Windows 10 and while that’s probably a long way away, I began to consider upgrading.

Yeah, well…

My Rig

Before I go on, it’s worth just mentioning my machine’s (the cool kids call it a “rig”) setup. As I mentioned, it’s an i9 with 12 cores. It also sports some 64GB of RAM. For storage – and this is crucial – it’s got a 256GB SSD for the C: drive, and well over 16TB on internal and external hard drives.

Why such a “small” C: drive? (I put it in quotes because I’m an old guy and I remember when having a loud, power-hungry, full-height 100-megabyte drive in your PC exemplified the state of the art. I still have that drive in my garage.)

Anyway, here’s why:

  • Users – especially gamers, whom these sorts of high-end machines target – want really fast boot, which SSDs give you.
  • But byte for byte SSDs are super-expensive compared to hard disk drives.
  • So they tend to be smallish, capacity-wise.

In principle it wasn’t that horrible an idea. Put all the system stuff on the SSD (C:) and your “data” on another, slower drive (usually D:).

But Windows never understood how to take advantage of such a (very common) configuration. Instead by default it puts everything – applications, data, Word files, Excel spreadsheets – in various locations on the C: drive. More on that later.

Early on I recognized my skinny C:’s limits and cleverly moved my /Documents folder to a larger hard-disk drive. I figured – rightly – that I would never notice if my Word documents or Excel spreadsheets took 600 instead of 300 milliseconds to load. That would make much more economical use of my C: drive, and I’d always have room for whatever.

Wrong. As we’ll see.

First Attempts

It had been a year since Windows 11’s release and all my friends and all the cool kids were using it. Maybe, I thought, it’s time.

And Microsoft wanted me to. Really wanted me to. Every now and then when I turned on my computer in the morning it would display an incredibly annoying, full-screen nag urging me to upgrade, with “Keep Windows 10” in very small print at the bottom.

Which I always did.

Until…

One day I succumbed.

Not Enough Room

Over time my poor beast has accumulated many, many apps. I’m a developer, so I have Visual Studio with all the fixings. Office, of course. Python. Visual Studio Code. Windows Subsystem for Linux, for a time; Docker and containers.  I’m a writer too, so Grammarly, Final Draft, Kindle tools.

So on my first attempt – around about nine months ago – the upgrade process ran for thirty minutes or so, then told me I didn’t have enough room on my C: drive. It offered a bunch of pretty useless suggestions about cleaning the drive – yes, empty the recycle bin, clear all temp files, blah, blah. (I could of course compress the C: drive, but the compression/decompression overhead would negate the whole point of an SSD.)

Having work to do, I gave up for a while.

Eventually I tried again. Realizing I was nowhere close, I took some radical steps. I uninstalled Visual Studio, a big disk hog, and reinstalled it on my F: drive…a good idea, except that you can’t install all of VS to another drive, just the IDE. All the libraries, assemblies, and a lot of other stuff still wind up on C:.

Only 4GB available.

Not enough.

WTF Is Taking Up All the Space?

Around about now I began to wonder: what the heck was actually using 252GB? It couldn’t just be Visual Studio.

Using Windows Explorer, sorting by size, turned up nothing. I’d long ago moved or deleted those big home movie MP4s and such. No – I learned it wasn’t about a few big files, but rather about the vast quantities of files. Zillions of them, a few megs here, a few there.

But where exactly? Somewhere, there must be folders with huge numbers of files occupying huge amounts of space.

Now, anybody who’s done a dir/s of the C: drive knows that even on a brand-new machine there are thousands and thousands of folders. Figuring out what’s where is not something you can just eyeball.

And to my shock, Windows does not have a utility that shows folder sizes – i.e., one that sorts folders, not files, by size. It doesn’t even have an API (in .NET or Win32) to return folder size!

Time to get serious.

Time to write an app.

So, dirsort is a highly inefficient, slow, but functional C# application that recursively goes through folders, adds up file sizes, and places the summed sizes in a sorted list. [Edit: my friend Miha tells me there’s an open-source app called WinDirStat that does this. Who knew?] [Edit to the edit: And WizTree…]
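
For the curious, the core of dirsort amounted to something like the sketch below. (This is a reconstruction for illustration, not the original source; the class and method names are mine.)

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;

    class DirSort
    {
        static void Main(string[] args)
        {
            string root = args.Length > 0 ? args[0] : @"C:\";
            var totals = new Dictionary<string, long>();
            Accumulate(root, totals);

            // Print the 50 largest folders, biggest first.
            foreach (var entry in totals.OrderByDescending(kv => kv.Value).Take(50))
                Console.WriteLine($"{entry.Value / (1024 * 1024),10:N0} MB  {entry.Key}");
        }

        // Returns the total size of 'folder', including subfolders, recording each folder's total.
        static long Accumulate(string folder, Dictionary<string, long> totals)
        {
            long size = 0;
            try
            {
                foreach (var file in Directory.EnumerateFiles(folder))
                    size += new FileInfo(file).Length;
                foreach (var sub in Directory.EnumerateDirectories(folder))
                    size += Accumulate(sub, totals);
            }
            catch (UnauthorizedAccessException) { /* skip folders we can't read */ }
            catch (IOException) { /* skip reparse points, broken links, and the like */ }
            totals[folder] = size;
            return size;
        }
    }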

I ran it on my C: drive.

Good lord.

What a Freaking Mess!

What I found on my C: drive absolutely astounded me. I mean, speechless.

Here are just some of the things I discovered.

AppData

There’s a folder called C:\Users\[YourNameHere]\AppData. It’s hidden, by the way. That’s right, hidden. All told it contained about 80GB of files.

Theory has it that \AppData is just for temporary application data, as the name suggests. It’s got three subfolders: \Local, \LocalLow, and \Roaming. Local is data just for your PC; Roaming is data (like bookmarks) that can travel to any PC you log on to.

Yeah, all that’s pretty much BS in practice.

Applications install themselves to AppData. Applications like Visual Studio Code – from Microsoft!

There is an \AppData\Local\Temp folder which they say you can safely clear. For me it held about 8GB, all of which I deleted. These were temporary files that in some cases dated from the late Jurassic. Not temporary.

There’s all kinds of other crap in \AppData – who knows what it’s all for. And it occupies nearly a third of my C: drive.

Did I mention it’s hidden?

Bad, Bad Developers

Look, developers. When you’re done with a temporary file, delete it, goddammit. When you’re done with your installer, delete it, goddammit. Delete your year-old app crash dumps, goddammit.

All over the C: drive I found clutter. INI files, DMP files, files that are named with a GUID (fer cryin’ out loud you idiots!).

Clean up your messes!

A Light at the End of the Tunnel

And then I found these two files at the root of the C: drive: swapfile.sys and hiberfil.sys. The swap file – ok, we all know what that is. Don’t touch it.

Now, my app found these but I couldn’t see them in File Explorer or cmd. It turns out they are system files (not hidden files which are of course different).

But what is hiberfil.sys? More searching…and…

It’s the file where Windows saves the contents of memory when it hibernates.  (“Hibernation” is different from “sleep” and don’t ask me what the difference is.) On my system the file occupied 26GB of disk – awesome, if I can get rid of it I can finally upgrade! Woo-hoo!

So: can I relocate it to another drive? Again – if it takes three seconds to un-hibernate rather than one I really don’t care.

No. For some very strange reason you can move the swapfile to another drive – don’t even think about it! – but not the hibernation file.

Why? Who knows?

However, you can safely delete hiberfil.sys! Yes! Go into the cmd prompt in administrator mode and type powercfg -h off which turns off all the hibernation machinery. Then reboot.

Wow!

Almost 30GB back!

And…I could install Windows 11!

Dear Microsoft

Yes, it’s all about cloud and AI these days, I get it. Dear Microsoft, please don’t forget about Windows.

I have so many recommendations. Here they are in no particular order:

  • Please rationalize where you put apps. There’s \Program Files, \Program Files (x86), three folders in \AppData, \WindowsApps, God knows where else. It’s ridiculous.
  • While you’re at it, Get Rid of \AppData. It’s a mess. A huge one.
  • Every few months start a background, low-priority job to clear out .tmp, .dmp, and other such crap files that are over a year old.
  • When imaging a new system, if it has a small-ish C: drive and big D: drive, put non-performance-sensitive stuff on the D: drive by default.
  • Please build a utility that displays a sorted list of folder sizes so we know where all this crap is.
  • Enable hiberfil.sys to be placed on an alternate drive.
  • Talk to your developers, ISVs, MVPs, etc., and build a “well-behaved application” checklist that includes best practices, like not using \AppData. Enforce it.
  • It should be possible to install any application in its entirety (looking at you, Visual Studio) on any drive.
  • How about some better tools to deep-clean a C: drive, instead of just suggesting we empty the recycle bin?

Lastly, dear Microsoft, I remind you I had to write an app in order to get Windows 11 to install. An app. I had to write one. To get Windows 11 to install.

Were I not a coder, the only way I could get Windows 11 would be…to buy a new machine!

Was It Worth It?

I’m only a couple of hours into Windows 11. I managed to successfully move the taskbar back to the left so it’s back to looking like Windows 10. I suspect everybody does this. [Update after a day: I like the tabbed cmd.exe interface. I don’t like that Windows 11 lost my desktop images. And – worst of all but probably not surprising – I wanted to turn hibernation back on and – you guessed it – “There is not enough space on the disk.” Aaaargghhh!]

Otherwise I really haven’t noticed any difference. (I sure would like a Time Machine-like backup app, though, instead of backing up to OneDrive. But that’s a story for another day.)

I’m one of the cool kids now, I guess.

Fixing Alexa

The news has it that Amazon is laying off “thousands” of workers, many of them in the Alexa group.

That’s too bad, but probably not surprising. As an outsider, I view Alexa as a technology with an identity crisis — it tries to do many, many things and does none of them particularly well.

Don’t get me wrong. I love Alexa — I have an Alexa Show in (almost) every room in the house. But as useful and (occasionally) fun as it is, it can also be incredibly annoying.

Here’s my recipe for fixing it:

  1. Forget about Alexa “helping” Amazon. I won’t ever buy anything through Alexa. Forget it. Alexa is not a supporting character in the Amazon universe: it’s not a new “channel”; it’s a star in its own right. Stop advertising.
  2. Forget about “monetizing” Alexa. Forget it! Stop wasting time and build stuff I’ll get a kick out of. Make your money from the sale of the devices.
  3. Embrace what Alexa is used for. All of our Alexa Shows are primarily used as digital picture frames connected to Amazon Photos. Yeah, and the weather screen is helpful too. Oh, yeah, the timer app is helpful in the kitchen.
  4. Embrace what Alexa could be used for. The most exciting use case for Alexa is driving home automation. Make it work seamlessly with Blink and all the other gadgets (and, by the way, how about some really high-end home security products? 24×7 video monitoring, etc.). Build in all the home automation protocols — Zigbee, etc. Interoperate with Apple and Google devices — be the first!
  5. Give me management. I have a fleet of Alexas — I want to manage them all from one place (preferably my PC where I have lots of real estate, and absolutely positively NOT my phone where I can barely read the Alexa app’s tiny font!). I want to be able to set preferences and settings for all the Alexas in my home at a stroke. While you’re at it, give us an API that can be used for more than just skills development.
  6. Stop being annoying. Stop showing me yesterday’s news. Stop asking me if I have the flu.
  7. While you’re at it, fix the Photos app. It’s really terrible — it’s slow, it has memory leaks, and does stupid stuff (like it uploads HEICs but you can’t see them on the web or on Alexa). There’s a real opportunity for a great cloud photos app which Alexa could leverage: do it!

That’s for starters. I have a few thousand other ideas but the main thing here is focus. Alexa should be about usefulness in the home, not about selling me more stuff or advancing the Amazon brand.

The Rise and Fall of Lotus eSuite

By Barry Briggs

[This is a draft based on my recollections. I’m sure it’s not complete or even 100% correct; I hope that others who were involved can supplement with their memories which I will fold in. Drop a comment or a DM on Facebook or Twitter @barrybriggs!]

In 1997, Lotus Development, an incredibly innovative software firm that had previously created Lotus 1-2-3, for a time the most popular software application on the planet, and Lotus Notes, for a time the most widely used email and collaboration application, released a set of Java applets called eSuite.

You could say a lot of things about Lotus eSuite: it was, well, very cool, way (way) ahead of its time, and for a very brief period of time had the opportunity of dethroning Microsoft Office from its dominant position. Really. Well, maybe.

But it didn’t.

What went right? What went wrong?

Here is my perspective. Why do I have anything to say about it? Well, I was intimately involved with eSuite. You might even say I invented it.

Java and Platform Independence

In the bad old days of application development, you wrote an app in a language like C or C++ (or even assembler) which compiled/assembled to machine code. That code could only be executed by the specific type of processor in the box, like an Intel 80386. Moreover, your code had to interact with its environment — say, Windows — which meant it had to call upon system services, like displaying a button or a dialog box.

If you wanted to run your code on a different architecture, say a Motorola 68000-based Mac, you had to make massive changes to the source, because not all C compilers were alike, and because the underlying services offered by Windows and Mac were quite different. You coded a button on Windows very differently from one on MacOS or X-Windows. Hence at Lotus we had separate, large teams for Windows, OS/2, and Mac versions of the same product. (In fact, we were occasionally criticized for having spreadsheet products that looked like they came from different companies: the Mac, OS/2, and Windows versions of 1-2-3, built to conform to those platforms’ user interface standards, did look very different.)

Back to our story.

In 1995, Sun Microsystems released the first version of their new high-level programming language, Java. Because it compiled to byte codes instead of machine code, it had huge promise: you could, the theory went, “write once, run anywhere.” In other words, each platform – Windows, Mac, Sun, Unix (Linux was still nascent) – would have a runtime which could translate the byte codes to executable code appropriate for that device.

Perhaps even better, Java’s libraries (called the AWT, or Abstract Window Toolkit) also “abstracted” (wrapped) the underlying operating system services with a common API. The AWT’s function to create a button created a Windows button on Windows, a Mac button on MacOS, and so on.

Cool! So why was this more than just a neat technical achievement?

At the time, Microsoft largely dominated personal computing, and its competitors, principally Lotus and Sun, faced existential threats from the Redmond giant. (I’m not going to spend much time talking about how Microsoft achieved this position. There are many varied opinions. My own view, having worked at both Lotus and Microsoft, and thus having seen both companies from the inside, is that Microsoft simply outcompeted the others.)

In any event, many saw Java as a godsend, having the potential to release the industry from Microsoft’s stranglehold. In theory, you could write an application and it could run on anything you like. So who needed Windows? Office?

Browsers

Even cooler, Marc Andreessen’s Netscape Navigator introduced a Java runtime into version 2 of their browser, which at the time pretty much owned the marketplace. Microsoft’s Internet Explorer followed with Java support shortly thereafter.

Everybody at the time recognized that browser-based computing was going to be terribly significant, but web-based applications – especially dynamic, interactive user interfaces in the browser – were primitive (and ugly!) at best. HTML was both very limited and extremely fluid at the time; the W3C had only been founded in 1994 and in any event the value of web standards had yet to be recognized. Browser developers, seeking to gain advantage, all created their own tags more or less willy-nilly. A very primitive form of JavaScript (confusingly, not at all the same as Java) was also introduced at this time but it couldn’t do much. And the beautiful renderings that CSS makes possible still lay in the future.

Anyway, Netscape and IE introduced an <applet> tag which let you embed (gulp) Java code in a web page. Sounded great at the time: code in a web page! And Netscape had browser versions for Windows, for Mac, for Sun workstations…you could write an applet and it would magically work on all of them. Wow!

A word on security (also kind of a new idea at the time, not widely understood and – in my view – universally underestimated). A web page could run Java in what was called a sandbox, meaning that it was isolated from the rest of the platform – the idea being you didn’t want to run a web page that deleted all the files on your PC, or scanned it for personal information.

I’ll have more to say about applet security in a moment.

Enter Your Hero

Somewhere around this time, being between projects, I started playing with Java. I had in my possession a chunk of source code that Jonathan Sachs, the original author of 1-2-3, had himself written as an experiment to test the then-new (to PCs: yes, purists, I know it had been around on Unix for years) C language. (How archaic that sounds today!) I have to say before going forward that Sachs’ code was just beautiful – elegant, readable, and as far as I could see, bug-free.

So I started porting (converting) it to Java. Now Java can trace its roots to C and C++ so the basics were fairly straightforward. However, I did have to rewrite the entire UI to the AWT, because 1-2-3/C, as it was called, was not coded for a graphical interface.

And…it worked!

I started showing it around to my friends at Lotus and ultimately to the senior managers, including the Co-CEOs, Jeff Papows and Mike Zisman, who saw it as a new way to compete against Microsoft.

Could we build a desktop productivity suite hosted in the browser that runs on all platforms, and thus do an end run around the evil Redmondians?

Things Get Complicated

Suddenly (or so it seemed to me) my little prototype had turned into a Big Corporate Initiative. Some of my friends and colleagues started playing with Java as well, and soon we had miniature versions of an email client, charting, word processing based on our thick client app Ami Pro, calendaring and scheduling based on Organizer, and presentation graphics based on Freelance Graphics.

And my colleague Doug Wilson, one of the 1-2-3 architects, came up with a brilliant way to integrate applets using a publish-and-subscribe pipeline called the InfoBus, the API to which we made public so anybody could write a Kona-compatible applet.

InfoBus was really an amazing innovation. With the InfoBus we were able to componentize our applications, letting users create what today would be called composite apps. The spreadsheet applet was separate from the chart applet but communicated through the InfoBus – giving the illusion of a single, integrated application. So, for example, you could have the spreadsheet applet and the charting applet hosted together on a web page.

Twenty-five years ago this was pretty awesome.

To make it all official, we had a name for our stuff: “Codename Kona,” we called it, playing off of the coffee theme of Java. (Get it?) Personally I loved this name and wanted it for the official product name…but there were issues. More on this in a moment.

And then a few things happened.

IBM

In June of 1995, IBM (heard of it?) bought Lotus. I heard the news on the radio driving in to our Cambridge, Massachusetts office, and was both horrified and relieved. Lotus – frankly – wasn’t doing all that well so getting bailed out was good; but IBM? That big, bureaucratic behemoth?

IBM purchased the company primarily for Notes, as their mainframe-based email system, Profs, was an abject failure in the marketplace, and Notes, far more technologically advanced, was doing fairly well. And since everybody needed email, owning the email system meant you owned the enterprise – at least that was the contention, and the investment thesis.

To my surprise, IBM showed far less interest in the desktop apps (which we’d named SmartSuite to compete with Office). They couldn’t care less about what was arguably one of the most valuable brands of the time – 1-2-3. But Kona fit into their networked applications strategy perfectly, which (I suppose) beat some of the alternatives at least.

The Network Computer

IBM had another strategy for beating Microsoft on the desktop, and again, Kona fit into it like a glove: the network computer. The NC, in essence, was a stripped-down PC that only ran enough system software to host a browser – no Windows, no Office, everything runs off the servers (where IBM with mainframes and AS/400’s ruled in the data center, and Sun dominated the web).

Oh, my. So we split up the teams: one focused on delivering Kona for browsers, the other, led by my late friend the great Alex Morrow, for the NC.

Lotusphere

Jeff and Mike, our co-CEOs, wanted to showcase Kona at Lotus’ annual developer convention, Lotusphere, held every winter at Disney World in Florida, at the Swan and Dolphin auditorium. Ten thousand people attended in person. (Hard to imagine these days.)

Including, by the way, the CEO of IBM, Lou Gerstner, and his direct reports.

We had great plans for the keynote address. We developed a script. We hired professional coaches to help us learn the finer points of public speaking. We rehearsed and rehearsed and rehearsed. Larry Roshfeld would do a brief introduction, then I would do a short demo on Windows, and then Lynne Capozzi would show the same software (“write once run anywhere,” remember?) on an NC.

Things went wrong.

First, my microphone failed. In front of this ocean of people I had to switch lavaliers: talk about embarrassing! (These days I tell people I’ve never been afraid of public speaking since; nothing that traumatic could ever happen again!)

But that wasn’t the worst.

In front of all those customers and executives, the NC crashed during poor Lynne’s demo. She handled it with remarkable grace – as I recall she rebooted and was able to complete the demo – but talk about stress!

Bill and I

Now as competitive as Lotus and Microsoft were on the desktop, there were, surprisingly, areas of cooperation. For a time, the primary driver of Windows NT server sales was Lotus Notes, and so (again, for a very brief time) it behooved Microsoft to make NT work well with Notes.

And so Jeff, several Notes developers, and I hopped a plane – the IBM private jet, no less! – for a “summit conference” with Microsoft.

We spent a day in Building 8, then where Bill had his office. It was not my first time at Microsoft – I’d been there many times for briefings – but it was to be my first meeting with Bill. After several NT presentations he joined us during Charles Fitzgerald’s talk on Microsoft’s version of Java, called Visual J++ (following the Visual C++ branding). I’ll have more to say about J++ in a minute.

This being my space, I asked a lot of questions, and had a good dialogue with Charles. (I had more conversations with him over the years and always found him to be brilliant and insightful; read his blog Platformonomics, it’s great.) At one point, however, Bill leaned forward and pointedly asked, “Do you mean to tell me you’re writing serious apps in Java?”

To which I replied, “Well, yes.”

“You’re on drugs!” he snapped.

Thus ended my first interaction with the richest man in the world.

Launch

Nevertheless, perhaps because of IBM’s enormous leverage in the marketplace, customers expressed interest in Kona and we got a lot of positive press. Many resonated with the idea of networked applications that could run on a diverse set of hardware and operating systems.

And we were blessed with a superior team of technically talented individuals. Doug Wilson, Alex Morrow, Reed Sturtevant, Jeff Buxton, Mark Colan, Michael Welles, Phil Stanhope, and Jonathan Booth were just some of the amazing, top-tier folks that worked on Kona.

Kona.

As we drew closer to launch, the marketing team started thinking about what to officially name this thing. I – and actually most of the team including the marketing folks – favored Kona: slick, easy to remember, resonant with Java.

We couldn’t, for two reasons.

One: Sun claimed, by virtue of its trademarking of the Java name, that it owned all coffee-related names and they’d take us to court if we used “Kona.” I was incredulous. This was nuts! But we didn’t want to go to war with an ally, so…

Two: it turns out that in Portuguese “Kona” is a very obscene word, and our Lisbon team begged us not to use it. We all were forced to agree that, unlike Scott McNealy’s, this was a fair objection.

The marketing team came up with “eSuite,” which, truth be told, I hated. But I understood it: rumor had it that IBM, our new parent, had paid their advertising firm tens of millions of dollars for their internet brand, which centered around the use of the letter “e” — as in eCommerce and e-business. (Hey, this was 1995!) So our stuff had to support the brand. I guess that made sense.

So What Went Wrong?

eSuite was a beautiful, elegant set of applications created by an incredible team of talented developers, designers, testers, product management, and marketers. So why did it ultimately fail? Others may have their own explanations; these are mine.

Microsoft Got Java Right, None of the Others Did

Paradoxically, the best Java runtime – by far – was Microsoft’s. Sun had written a Java runtime and AWT for Windows, but it used a high-level C++ framework called Microsoft Foundation Classes (MFC). MFC (which itself abstracted a lot of the complexity of the underlying windowing and input systems, among other things) was great for building business apps (it was the C++ predecessor to Windows Forms, for the initiated). But it was absolutely wrong for platform-level code – the AWT on MFC was an abstraction on top of an abstraction, and as a result it was sssslllooowww. Similar story for Apple, and, believe it or not, for Sun workstations.

Microsoft on the other hand rewrote the Windows version of the AWT directly to Win32, in effect, to the metal. Hence it was way faster. And it re-engineered a lot of other areas of the runtime, such as Java’s garbage collector, making it faster and safer. Not only that, J++, as Microsoft’s version was called, was integrated into Microsoft’s IDE, Visual Studio, and took advantage of the latter’s excellent development, editing, and debugging tools – which no other vendor offered.

I attended the first JavaOne convention in San Francisco. Microsoft’s only session, which was scheduled (probably on purpose) late on the last day, featured an engineer going into these details in front of an SRO audience.

I remember thinking: okay, if you want the best Java, use Windows, but if you’re using Windows, why wouldn’t you just use Office?

Security

Now in fairness, the Java team was very focused on security; I mentioned the sandboxing notion that the applet environment enforced, which has since become a common paradigm. They rightly worried about applets making unauthorized access to system resources like files (a good thing to worry about), so at first any access to these resources was prohibited. Later, in v1.1, they implemented a digital-signature-based approach to let developers create so-called “trusted” applets.

But that wasn’t all.

In effect, on load, the runtime simulated execution of the applet, checking every code path to make sure nothing untoward could possibly happen.

Imagine: you load a spreadsheet applet, and it simulates every possible recalculation path, every single @-function. Whew! Between network latency and this, load time was, well, awful.

The Network Computer was DOA

So, if you only want to run a browser, and you don’t need all the features of an operating system like Windows, you can strip down the hardware to make it cheap, right?

Nope.

I remember chatting with an IBM VP who explained the NC’s technical specs. I tried telling him that eSuite required at least some processing and graphics horsepower underneath, to no avail. In fact, as I tried to point out, browsers are demanding thick-client applications requiring all the capabilities of a modern computer.

(Chromebooks are the spiritual descendants of NCs but they’ve learned the lesson, typically having decent processors and full-fledged OSs underneath.)

Sun and Lotus Had Different Aspirations

In a word, Lotus wanted to use Java as a way to fight Microsoft on the office applications front. Basically, we wanted to contain Microsoft: they could have the OS and the development tools on Intel PCs, but we wanted cross-platform applications that ran on Windows and everywhere else – which we believed would be a huge competitive advantage against Office.

To achieve that, Lotus needed Sun to be a software development company, a supplier – ironically, to behave a lot like Microsoft’s developer team did with its independent software vendors (ISVs) – with tools, documentation, and developer relations teams.

Sun (as best as I could tell) wanted to be Microsoft, and its leadership seemed to relish the idea of a war (the animosity between Sun CEO Scott McNealy and Bill Gates was palpable). Sun couldn’t care less about allies, as the silly little skirmish over naming proved. But it clearly didn’t understand the types of applications we built, and certainly didn’t understand the expectations users had for their apps. Instead Sun changed course, focusing on the server with Java-based frameworks for server apps (the highly successful J2EE).

Perhaps somewhere along the line it made the business decision that it couldn’t afford to compete on both server and client – I don’t know. In any event the decline of the applet model opened the door to JavaScript, the dominant model today.

Eventually, and tragically, Microsoft abandoned Visual J++ and its vastly better runtime. Why? Some say that Microsoft’s version failed to pass Sun’s compliance tests; others, that Microsoft refused Sun’s onerous licensing demands. In any event, there was a lawsuit, Microsoft stopped work on J++, and some time later launched C#, a direct competitor to Java that has since rivaled it in popularity.

ActiveX

Not to be outdone, Microsoft introduced its own components-in-browsers architecture, called ActiveX. Unlike Java, ActiveX did not use a byte-code approach, nor did it employ the code-simulation security strategy that applets had. As a result, ActiveX controls, as they were called, performed much better than applets – but they only ran on Windows. And the FUD (fear, uncertainty, and doubt) ActiveX created around Java applets was profound.

Lotus’ Priorities Changed

Lotus/IBM itself deprioritized its desktop application development in favor of Notes, which was believed to be a bigger growth market. Much as I admired Notes (I’d worked on it as well) I didn’t agree with the decision: Notes was expensive, it was a corporate sell, and had a long and often complicated sales cycle. I never believed we could “win” (whatever that meant) against Microsoft with Notes alone.

It was true that early on Exchange lagged behind Notes but it was also clear that Microsoft was laser-focused on Notes, so our advantage could only be temporary.

Someone told me that “Office is a multi-billion-dollar business, SmartSuite is a $900 million business, why fight tooth and nail in the trenches for every sale?” My mouth dropped open: why abandon almost a billion-dollar revenue stream? (Office is now around $60 billion in annual revenue, so staying in the game might have been good. Yes, hindsight.)

eSuite Was Ahead of its Time

Today, productivity applications in the browser are commonplace: you can run Office applications in browsers with remarkably high fidelity to the thick client versions. Google Docs offer similar, if more lightweight, capabilities.

Both of these run on a mature triad of browser technologies: HTML, JavaScript, and CSS. And the PCs and Macs that run these browsers sport processors with billions of transistors and rarely have less than 8 gigabytes of memory – hardly imaginable in the mid-1990s.

And eSuite depended upon secure, scalable server infrastructure conforming to broadly accepted standards, like authentication, and high-speed networks capable of delivering the apps and data.  

All that was yet to come. Many companies were yet to deploy networks, and those that had faced a plethora of standards — Novell, Lanman, Banyan, and so on. Few had opened their organizations to the internet.

eSuite’s Legacy

I hope you’re getting the idea that the era of eSuite was one of rapid innovation, of tectonic conflict, competition, and occasional opportunistic cooperation between personalities and corporations, all powered by teams of incredibly skilled developers in each. The swirling uncertainties of those times have largely coalesced today into well-accepted technology paradigms, which in many ways is to be applauded, as they make possible phenomenally useful and remarkable applications like Office Online and Google Docs (which, I’m told, is now called “GSuite”). In other ways – well, all that chaos was fun.

I wonder sometimes if eSuite might have seen more adoption had Lotus simply stuck to it more. To be fair, IBM, which had originally promised to remain “hands-off” of Lotus, increasingly focused on Notes and its internet successor, Domino; I’m guessing (I was gone by this time) that they saw Domino as their principal growth driver. Desktop apps were more or less on life support.

Still, by the early 2000s web-based computing was becoming better understood: web services had been introduced, PCs were more capable, and networks had standardized on TCP/IP. Who knows?

Timing, they say, is everything.

Composability and Events

Apparently one of the new buzzwords is composability, meaning everything from reorganizing (“pivoting”) your business quickly in response to changing market conditions to adding new technical capabilities to your applications as needed. As new features come online, the story goes, you should be able to seamlessly (that word!) add them to your applications as you need them, and ditch the ones you don’t need any more.

Now, let’s see, where O where have I heard this story before? DLLs, Java Applets, ActiveX, Enterprise JavaBeans, Service-Oriented Architecture, Service Provider Interfaces, the API Economy: it seems like every few years we have to rediscover how utterly cool modularity and (if we’re really chic) loose coupling are.

Technically, composability appears to mean something like a combination of SPIs and APIs. Microsoft touts the fact that it’s easy to add a FedEx module to Dynamics to enable shipping when it absolutely, positively has to be there overnight.

Cool.

Real composability, it seems to me, means a near-infinitely malleable product whose behavior can be adapted to any reasonable need.

How do you do that? (What does that even mean?)

Of course part of the answer involves a good, solid set of APIs to an application, documented, hopefully, with OpenAPI (nee Swagger) or something similar. Enough has been written about Why APIs Are Good that I’m not going to repeat their virtues.

But what about when you want to change, or augment, or even replace the core processing of an application feature? Well, of course many applications support events so you can know when they’re about to do something, or when they’ve done something.

But back in the day, working on Lotus 1-2-3, my team and I decided we needed something more powerful. Our scripting language team (LotusScript) was demanding deep access to the product internals, and our addons, like our Solver, even deeper ones. They needed to execute code in some cases before the relevant application code and in some cases after – for example, to sideload a file needed by the addon. And in certain cases – for example, loading a file type not supported by the original app – they needed to replace the existing code.

We had a pretty comprehensive set of APIs. But they didn’t solve the problem.

The Problem

Here’s the core idea: imagine a file load routine (this is pseudocode, so don’t get upset):
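
(What follows is a reconstruction in C#-flavored pseudocode; the handler names are invented for illustration.)

    FileHandle OpenFile(string path)
    {
        // Core handling: dispatch on the file extension.
        string ext = Path.GetExtension(path).ToLowerInvariant();
        switch (ext)
        {
            case ".wk4": return LoadWorksheet(path);
            case ".txt": return LoadTextFile(path);
            case ".csv": return LoadDelimitedFile(path);
            default:     return Error("Unrecognized file type: " + path);
        }
    }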

Pretty straightforward: parse the file extension and pass it off to the right handler. No worries.

But what if you want to load a PDF? Or a text file? Or an MP3, for whatever reason? (Hey why not?)

Introducing the Event Manager

The idea of our Event Manager was simple: an addon could register for an event that happened before the core code ran, and/or an event that ran after the core code. In addition, the addon could return one of three values:

  • Ran successfully
  • Ran successfully, and bypass core code
  • Error

In other words, something like this:
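
(Again a reconstruction for illustration: the EventResult values and the EventManager calls follow the design described here, but the names are mine.)

    enum EventResult { Handled, HandledBypassCore, Error }

    FileHandle OpenFile(string path)
    {
        // Fan the Before-Event out to every addon registered for "OpenFile".
        EventResult before = EventManager.BeforeEvent("OpenFile", path);
        if (before == EventResult.Error)
            return Error("An addon failed while opening " + path);

        FileHandle handle = null;
        if (before != EventResult.HandledBypassCore)
        {
            // Core handling: dispatch on the file extension, exactly as before.
            handle = DispatchByExtension(path);
        }

        // After-Event: logging, shadow files, and so on. Addons handle their own errors here.
        EventManager.AfterEvent("OpenFile", path, handle);
        return handle;
    }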

Here you can see that the first thing that happens is that any addons that have registered for the “OpenFile” Before-Event get notified; they can ignore, augment, or even replace the core handling, and thus can load a wholly new file type if desired. (EventManager.BeforeEvent() fans out the event to all registered addons.)

The After-Event has fewer options, for obvious reasons. It can be used for logging, or to (say) load a shadow file (as many of the 1-2-3 addons did). In this case the addon has to handle any errors that occur, as the core code may not understand the addon’s semantics.

Value

We found this pattern very useful in 1-2-3, so much so that I ported the concept to Lotus Notes some time after. In some ways, I think, this provides a good benchmark of what composability should really be.

Loyalty and Competence

A recent document allegedly leaked from the Kremlin accuses the Russian hierarchy of being based upon loyalty, not professionalism. “Accordingly,” the author writes, “the higher the level of leadership, the less reliable information they have.”

This raises some interesting questions: shouldn’t, after all, an organization have an inherent basis in loyalty across the levels of the hierarchy? If so, which is more important, competence (or professionalism) or loyalty?

Let’s spend a moment examining this dichotomy. I’ll posit – because I’ve seen them – that in business there exist loyalty-centric organizations and competence-based organizations. Each has its merits, but each has serious weaknesses.

The Loyalty-Based Organization

Upon ascending to the American presidency, Donald Trump famously asked his staffers to swear their personal loyalty to him. Whether this was because he felt insecure in his new role, or threatened, or because he had some other motive will likely never be known.

Similarly, in the military, loyalty is a mandate: follow orders or people die.

Every manager wants his or her teams to have some amount of personal loyalty; that’s only human. Loyalty-based organizations take this to an extreme, however: the most loyal get the biggest raises, the juiciest assignments, and so on.

Still, such organizations have advantages. For example, a manager’s wish is followed – quickly – to the letter, which can be very satisfying (for the manager), and such organizations as a result often develop the reputation that they “get things done.”

However, there are some obvious downsides. A manager may hire less competent individuals – or favor them – if he or she deems them loyal, which lowers the overall capability of the organization. Moreover, highly skilled employees will often recognize the existence of a clique – and leave. The work product of such a team will not infrequently be mediocre.

The Competence-Based Organization

At the other end of the spectrum, competence-based organizations place the highest values on skills, knowledge, and professionalism. The driving factor in such organizations is not coming up with an answer, but rather the best answer – often, regardless of how long it takes or whose feelings get hurt along the way.

Competence-based organizations typically seek employees with the highest degrees, with the most accomplishments, but often have trouble keeping them; who wants to stay in a place where analysis takes precedence over accomplishment, where argument is the order of the day? Moreover, what manager wants to stay where employees have no respect or loyalty?

The Ideal

Obviously, organizations should strive for some balance between the two; it’s vitally important for teams to distinguish the relative values of competence and loyalty and strive to create a corporate culture that supports both, one in which healthy, animated discussion of options has its place, in which decisions are made with an open mind – but they are made.

In the real world of course most organizations swing more to one side or the other. As an employee you should know which your organization is; and as a manager, which of the two management styles you’ve created, and perhaps think about making adjustments.

So What Do You Do?

Well, your first decision is do you want to stay in this organization?

Assuming the answer is yes, then if you’re on a loyalty-centric team, it’s probably a good idea to demonstrate loyalty, perhaps by complimenting your boss (“Good idea!”) every now and then, or giving him/her credit (and maybe overdoing it a bit) during a meeting with your boss’s boss — even for one of your ideas! That sort of sucking up can be distasteful, but, hey, you said you wanted to stay.

If you’re in a competence-based organization, put on a program manager hat every now and then and see if you can drive decisions or an action plan (“I see we’ve got just five minutes left in this meeting, what’s the next step?”).

Sometimes, incidentally, what appears to be a competence-based team isn’t really — it’s just that the manager is afraid to take responsibility for a decision. If that’s the case, consider making the decision yourself (assuming you’re okay with the risk). That way the manager can feel comfortable that there’s someone else to point at if things go south (like I say, only if you’re comfortable with taking the responsibility).

Measuring the Value of Software Architecture

By Barry Briggs
[JUST A DRAFT RIGHT NOW!!]

Over the past few months I’ve been working with some old friends at the International Association of Software Architects (IASA) to try to figure out some way to quantitatively measure the value of software architecture. We’re trying to come up with answers to the following questions:

  • Why is software architecture good (i.e., why do you need software architects?)
  • How can you quantitatively assess an application or service?
  • What makes a good software architect?

These are difficult questions, particularly when you compare software architecture with other fields. For example, it’s relatively easy to quantify the value of a Six Sigma process-improvement organization: you measure time, resources required, and costs of a process before optimization, and then after, and you have a solid measurement of value – one that is simply not possible with software architecture.

Why?

Well, on a net-new project, architecture is applied at the very beginning, so it’s difficult to know if the lack of it would have made any difference. Arguably, on a rewrite of a project, one could compare against some set of criteria how much better the new version works vis-à-vis the old one – but there are usually so many other factors in such a project that it’s essentially impossible to separate out the contribution architecture makes. For example, faster hardware or just plain better coding might be the reason the new app runs faster, not the fact that the new design is factored more effectively.

The Army Barracks

Perhaps an analogy can help us tease out how to think about these questions. Software architecture is often compared (poorly) against physical, building architecture – but let’s try to make the analysis a bit more constructive (pun intended).

Consider something as mundane as an army barracks. How would we measure the quality of its architecture?

I suppose there are lots of ways, but here are mine.

First and foremost, does it do the job for which it was intended? That is, does it provide enough room to house the required number of soldiers, does it provide appropriate storage, bathrooms, and showers for them? Is it well insulated and heated? In other words, does it meet the immediate “business need?” If not – well, you certainly couldn’t assess its architecture as good in any way.

Then we could ask many other questions, such as:

  • Compliance with laws and standards, that is, building codes, Army regulations, local standards, and so on. Like business need, this one’s binary: if not compliant, no need to perform any additional evaluation.
  • How resilient is it? Can it withstand a power failure, a Force 5 hurricane or (since this is a military installation) a direct hit by an artillery shell?

  • How much load can it take? If there’s a general mobilization and much more space is needed, how many extra beds can it hold? 2x? 5x? 10x, in a pinch?

  • New workloads. The Army mandates that barracks become coed. Can the facilities be quickly adapted – if at all – to support separate sleeping areas, bathrooms, etc.?

  • How easy is it to add new features? For example, does it require a teardown to add air conditioning or can existing heating ducts be reused in the summer? How hard is it to install wi-fi hubs?

  • What about new components? Say the Army mandates that every barracks has to have a ping-pong table, which entails a building addition. Can such a thing be done quickly with minimal disruption?

  • Business continuity. Say the barracks does fall down in a storm. Are there sufficient facilities on the base – or on other bases – to rehouse the soldiers?

  • Aesthetics. OK, maybe this isn’t a good one for a barracks, but for other types of buildings – think I.M. Pei or Frank Lloyd Wright – aesthetics drive our view of good architecture.

You get the idea, and, hopefully, the analogy. In this case the value of good design – of architecture – is readily apparent.

Assessing Software Architecture

When we think about software architecture, we can apply similar criteria.

Business Need

If the software doesn’t satisfy business requirements, then – as we said above – it by definition cannot be “well-architected.” Determining how well software meets the need, however, can be an interesting and challenging discussion. For years, software development began with requirements documents, which could stretch to tens, hundreds, even thousands of pages; and managers would simply tick off the features that were implemented. (And as often as not by the time all the documented requirements were met, the business environment had changed, and the app was behind.)

With agile development, users are much more involved in development from the start, tweaking and mid-course-correcting the product during the development process. If there is a requirements document, it represents the starting point rather than a final statement – and this is good, because as the product takes shape, opportunities always present themselves, both to users and developers.

Still, how do we assess how well the product meets the need? Of course, one way is to ask users if they have the features they need; if not, something’s obviously missing.

But that’s not all.

Every line of code, every non-code artifact (e.g., images) should be traceable back to the business requirement. If there is a feature, somebody should be using it. Monitoring tools can help track which features are exercised and which are not. (The Zachman Framework was an early approach to documenting traceability.)

This applies to infrastructure as well. As infrastructure is increasingly documented through Infrastructure-as-Code (IaC), these Terraform or ARM or CloudFormation configurations should justify their choices from a business perspective: why this or that instance type is required (expected load), why SSD storage is needed (anticipated IOPS), and so on.

Standards and Compliance

Like satisfying the business need, complying with relevant standards is binary: the software does or it doesn’t, and if it doesn’t, you’re done.

Now by standards we don’t mean “best practices” – we’ll talk about those in a moment. Rather, ensuring that personal data is anonymized in order to comply with GDPR, or that two-factor authentication against a central corporate provider (such as Active Directory) is used, or that only certain individuals have administrative privileges: where such standards are in place, they are mandatory; not complying places the organization at considerable risk, and thus the system cannot be assessed as well-architected.

However, best practices can be more flexible. For example, a cloud governance team may mandate the use of a particular cloud provider, a certain set of landing zones, a particular relational database, and so on. In rare cases exceptions may be granted. Such guidelines are intended to speed development and ease operations by removing the need for every development team to waste time selecting the appropriate provider or service and for operations teams to learn them all.

Granting such exceptions must be intentional: careful analysis should uncover the core need for the exception, the exception should be documented, and possibly the best practice should be updated.

Defining Your Software Architecture Strategy

As is true with best practices, the definition and importance of other aspects of software architecture will necessarily vary from organization to organization. When developing architecture assessments, organizations should consider what their goals regarding software architecture are. For example, what are the relative priorities of:

  • Application performance
  • Application scalability
  • Developer productivity
  • Business continuity, including RTO/RPO
  • Application visibility (observability) and self-healing
  • Software extensibility
  • Ease of upgrade
  • Usability (e.g., is it mundane/functional or beautiful?)

For example, for organizations that aren’t multinational, georedundancy or multi-regional replicas may not be necessary. Others may decide that the expense of active-active BC/DR solutions is too high.

Moreover, different applications will attach different levels of importance to these criteria. For example, an intranet application that shows cafeteria menus need hardly be georedundant or be built with microservices – it wouldn’t hurt, but perhaps resources could be devoted elsewhere!

Strategy to Principles to Assessment

Having defined the organization’s strategic goals for software architecture – i.e., what good software architecture is and why it’s necessary – actionable principles can be developed. By “actionable” we mean that developers can look at them and understand what must be implemented, and perhaps even how.

For example, if a key strategic goal is that applications should be extensible, then a principle – that a developer can use – is that apps should have a REST API, documented with OpenAPI or the like.

A good starting point can be popular industry principles, such as The Twelve-Factor App, originally intended to guide the development of SaaS applications but in fact very broadly applicable (shown below, via Wikipedia).

  • I. Codebase – There should be exactly one codebase for a deployed service, with the codebase being used for many deployments.
  • II. Dependencies – All dependencies should be declared, with no implicit reliance on system tools or libraries.
  • III. Config – Configuration that varies between deployments should be stored in the environment.
  • IV. Backing services – All backing services are treated as attached resources, attached and detached by the execution environment.
  • V. Build, release, run – The delivery pipeline should strictly consist of build, release, run.
  • VI. Processes – Applications should be deployed as one or more stateless processes, with persisted data stored on a backing service.
  • VII. Port binding – Self-contained services should make themselves available to other services via specified ports.
  • VIII. Concurrency – Concurrency is advocated by scaling individual processes.
  • IX. Disposability – Fast startup and shutdown are advocated for a more robust and resilient system.
  • X. Dev/prod parity – All environments should be as similar as possible.
  • XI. Logs – Applications should produce logs as event streams and leave the execution environment to aggregate them.
  • XII. Admin processes – Any needed admin tasks should be kept in source control and packaged with the application.

We can learn several things from 12-Factor:

Principles Must be Easy to Understand, and Actionable

There are many ways of framing principles, of which 12-Factor is just one. What is key is that developers should intuitively understand what it means to implement them. For example, in 12-Factor, “any needed admin tasks should be kept in source control” easily translates to putting IaC artifacts in a GitHub repo.

Another common approach to documenting principles is called PADU, which stands for Preferred, Acceptable, Discouraged, and Unacceptable. PADU is attractive because it enables a range of options. For example, a “Preferred” approach to project management might be the use of an online Kanban board; “Acceptable” might be a form of Agile; use of waterfall methodology might be “Discouraged;” and using Excel for project management would be “Unacceptable.” Governance bodies (or the teams themselves) can then score themselves on a 0-3 basis and require a minimum score to deploy.
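
To make that concrete, here is a minimal sketch of how such a scorecard might be computed (the principle names, threshold, and PADU-to-number mapping below are invented examples, not a standard):

    using System.Collections.Generic;
    using System.Linq;

    enum Padu { Unacceptable = 0, Discouraged = 1, Acceptable = 2, Preferred = 3 }

    static class ArchitectureReview
    {
        // A team passes if nothing is rated Unacceptable and the average score meets the bar.
        public static bool Passes(Dictionary<string, Padu> scores, double minimumAverage = 2.0)
        {
            if (scores.Values.Contains(Padu.Unacceptable)) return false;
            return scores.Values.Average(p => (int)p) >= minimumAverage;
        }
    }

    // Example usage:
    //   var scores = new Dictionary<string, Padu>
    //   {
    //       ["Project tracking"]  = Padu.Preferred,   // online Kanban board
    //       ["Methodology"]       = Padu.Acceptable,  // some form of Agile
    //       ["API documentation"] = Padu.Discouraged, // ad hoc docs only
    //   };
    //   bool ok = ArchitectureReview.Passes(scores);  // true: average is 2.0, nothing Unacceptable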

Principles Must Evolve

Organizations must recognize that owing to technical advances the principles may – and must – change over time. For example, the sixth “factor” above mandates that processes should be stateless; yet in today’s world it is increasingly possible, from both a technical and a cost-effectiveness point of view, to maintain state in business logic in certain circumstances.

Organizations Must Have Their Own Principles

Again, organizations may interpret industry principles according to their priorities and needs. Moreover they can – and should – add their own. For example, 12-Factor does not mention building zero-trust computing ecosystems and for many, if not most, this is essential.

Assessing Adherence to the Principles

Having created a robust set of principles, it’s relatively straightforward to measure the degree to which a given product or service adheres to them. Many organizations use scorecards to rate software in an architecture review process, with minimum passing grades.

The Value of Software Architecture

A not-so-obvious conclusion from this exercise is that there are fundamentally three value propositions of applying software architecture strategies, principles, and assessments:

• Usefulness, in other words, ensuring that the software does what its users want it to do, in terms of features, availability, and scale, to name a few dimensions.

• Risk mitigation. Compliance with regulations and standards helps reduce the probability of a business or technical disaster.

• Future-proofing, that is, enabling the product to grow both in terms of new features and the ability to exploit new technologies.

It’s exceedingly difficult to quantify the value of architecture (and architects), however. Yet it is intuitive that software cost estimation models such as COCOMO (the Constructive Cost Model), which base their estimates on lines of code (specifically, E = a(KLOC)^b, where a and b are empirically derived coefficients), could improve their accuracy by including a coefficient for architectural influence.
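
To make the formula concrete, here’s a small sketch using the published Basic COCOMO coefficients for an “organic” project (a = 2.4, b = 1.05); the architecture_factor parameter is purely hypothetical, illustrating the kind of coefficient I’m suggesting.

def cocomo_effort(kloc: float, a: float = 2.4, b: float = 1.05,
                  architecture_factor: float = 1.0) -> float:
    """Basic COCOMO effort in person-months: E = a * KLOC**b, optionally scaled."""
    # architecture_factor is a hypothetical multiplier, not part of Basic COCOMO.
    return a * (kloc ** b) * architecture_factor

# A 100 KLOC "organic" project: roughly 2.4 * 100**1.05, about 302 person-months.
print(f"Baseline estimate:       {cocomo_effort(100):.0f} person-months")
# Hypothetically, a strong architecture might shave 15% off the effort.
print(f"With architecture bonus: {cocomo_effort(100, architecture_factor=0.85):.0f} person-months")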

Many thanks to Miha Kralj of EPAM Systems, Jim Wilt of Best Buy, and Bill Wood of AWS for their comments and suggestions. Errors of course are my own.

Predictions for 2022

Well, it’s that time of year when everybody writes their predictions for the year, FWIW (which, given the track record of most such posts, probably isn’t much).

Here are mine … but first … a disclaimer: these are opinions which do not necessarily reflect those of anyone I work for, anyone I have worked for or will work for. Hell, with my ADD, they may not even represent my own opinions five minutes from now.

I Learned about * From That: Resolving Conflict

One of the great privileges we who worked at Lotus Development back in the eighties and nineties enjoyed was access to many remarkable events at MIT, Harvard, and other area institutions. In retrospect, some of these became legendary: Nicholas Negroponte at the Media Lab, and the first public introduction of Project Athena, whose graphical interface work later became the basis of the X Window System.

Perhaps the moment that stayed with me the most, however, and which I have since recounted any number of times, was a panel discussion in which Marvin Minsky (the father of AI) and Seymour Papert (co-inventor of the Logo programming language, among other things) took part.

Seymour (I don’t know if this is still true, but everybody in those days referred to everyone else, regardless of stature, by their first name; at one event the legendary Arthur C. Clarke conferenced in from his home in Sri Lanka, and everybody called the man who first proposed the communications satellite and wrote 2001 “Arthur,” which I found a bit disconcerting) told a story of his youth, when he was a math teacher.

(A caveat before I start: it’s been over 30 years since this event took place; I may not have recalled the details precisely. But the main points — and the moral of the story — are correct.)

Anyway, it went like this:

Though born and raised in South Africa, Seymour received a PhD from the University of Cambridge, and then taught at various universities around Europe. Passionate about mathematics education, at one point in his life he returned to his homeland to instruct teachers about the “New Math,” that is, new approaches to mathematics pedagogy. (These days I suppose we’re all a bit jaded about “new maths,” since there have been so many of them.)

He went from village to village, speaking in lecture halls and town halls, advocating for the new methodology. But he often noticed that as he spoke, slowly but inevitably, people would quietly leave; sometimes only half the audience remained by the end of the lecture.

Finally he asked someone: why, he wanted to know, were people walking out on him?

It’s the way you present it, he was told. Western-educated people tend to resolve conflict by deciding which of several viewpoints is right and which is wrong (or, in Hegelian terms, thesis and antithesis).

Here in the bush, however, we do things differently. We sit around in a circle round the acacia tree. One person proposes an idea. The next person respectfully acknowledges the idea and proposes some modifications.

And around the circle they go, until there is consensus.

Thus, at the end, they have a mutually agreed upon idea or plan. No one is “wrong.” No one is “right.” Since everyone has contributed, and agreed, everyone is “bought in,” to use the Western term.

I’ve used this approach countless times, and while it can be time-consuming, and does require a modicum of patience and maturity from all participants, it does work.

A Few Words about Lotus

[This post is a response to Chapter 49 of Steven Sinofsky’s wonderful online memoir, in which he talks about competitive pressures facing Office in the mid-90s. Unfortunately you have to pay a subscription fee in order to comment, so I’ll comment here instead.]

Steven,

Excellent and thought-provoking piece – brings back a lot of memories. During the time described, I was a senior developer at Lotus and I’d like to offer a couple of clarifications.

Re components: you (understandably) conflate two separate Lotus products, Lotus Components and Lotus eSuite. The former was a set of ActiveX controls written in C/C++, targeted at app construction. My late friend and colleague Alex Morrow used to talk about developers becoming segmented as “builders” and “assemblers.” “Builders” created the ActiveX controls, and “assemblers” stitched them together into applications. The idea persists today in the form of Logic Apps and so-called citizen developers. We at Lotus had some brilliant developers working on Components, but I suspect the concept proved premature in the market.

Even more ahead of its time was Lotus eSuite, which was a set of Java applets designed to run in the browser. eSuite got its start when a developer (actually, me) ported Lotus 1-2-3 v1 to Java as an experiment; Lotus and IBM loved it because it was perceived to be disruptive against Office, which, while not yet dominant, threatened Lotus’s SmartSuite.

Ironically, however, eSuite ran (by far) the best on Windows and IE. I recall attending the first JavaOne, where, at a breakout session, Microsoft demonstrated its rearchitected JVM and Java libraries – vastly better in terms of performance and load time than the Sun-supplied versions. (This was partly because where Sun built the Windows libraries on MFC – pretty clunky at the time – Microsoft wrote to Win32, essentially right to the metal.) And, of course, the IDE, Visual J++, supported the UI nuances and superior debugging experiences that we’d come to expect. It really was, as you quite rightly say, a tour de force.

But it was clear to us at Lotus that Microsoft had mixed feelings about it all. I and several others traveled to Redmond (aboard an IBM private jet, no less!) to talk with Microsoft execs about the future of NT and Java (why NT? Because at the time the Lotus Notes server was one of the key – if not the key – drivers of NT sales). In a day full of briefings in Building 8, Charles Fitzgerald, then the PM for VJ++, came last, and for that session we were joined by BillG, who couldn’t believe we were building “serious apps” on Java. (He told me I was “on drugs.”)

I always thought Microsoft’s abandonment of Java was a bit of a shame: I’d written an email to David Vaskevitch (then the Microsoft CTO) suggesting that Microsoft’s superiority in development tools and frameworks could be used to isolate competitor OSes – essentially wrapping Solaris in a Microsoft layer. I never heard back.

As it happened, we did ship Lotus eSuite – and it remained the case that neither Macs nor Sun workstations could compete performance-wise with Windows. (To this day I’m stumped as to why neither Apple nor Sun tried harder to make the technology competitive – it was existential for them.) And JVM and browser technology were at the time still evolving, so what worked on one platform wasn’t really guaranteed to work on another (belying the “write once, run anywhere” slogan).

eSuite also suffered from a particularly stupid design decision in the JVM (which I believe Microsoft was contractually obliged to implement as well). In order to prevent code from jumping out of the sandbox, the JVM analyzed every single code path at load time, before launch. For an app like a spreadsheet, which has hundreds of functions, recursive recalculation, a directed acyclic graph, and so on, the performance hit was murderous. I recall wondering why Sun et al. couldn’t use digital signatures to implement trust, but they never quite got the idea.

Anyway, the time for running productivity apps in the browser, unquestionably a great idea, hadn’t yet arrived. (It has now.)

I Learned about * From That: The Five Questions

For much of my career I functioned as a technical manager in the software industry, ranging from leading small development teams to serving as CTO for a division of one of the world’s largest technology firms. In those times I heard a LOT of presentations from bright and earnest people with terrific ideas: for new products, for new projects, for new initiatives. (And by the way, I made not a few of these pitches myself.)

After a time I noticed a pattern. Many presenters – especially younger ones – didn’t get this simple truth, that during a presentation managers are constantly thinking about two things: one, what decision do I have to make? And two, am I getting enough information to make the decision?

After all, that’s what managers get paid to do: make decisions. So above all else, your job in presenting is to tell – not hint, tell – the manager what the decision is. (I hate guessing what it is I’m supposed to decide.) That should be on slide 1 and on the last slide, and maybe sprinkled about the presentation.

So: be explicit about the decision: “we should buy Acme Industries.” “We should replace this database sooner rather than later.” “We should hire more people.”

Now as far as the second part, what sorts of information should you provide your manager?

Answer the Five Questions, and you’re off to a good start.

Question One: What is it?

What are you proposing? Tell your manager precisely what it is you’re suggesting doing, in as much detail as you think he or she needs to make the decision and no more. Don’t skimp, and (this happens much more frequently) don’t inundate your manager with details. The more detail you have, the more likely you are to find yourself at the bottom of a rathole with no hope of ever emerging. Your manager probably doesn’t need to know the details of all the error codes returned by this or that API.

Knowing how much information to supply is tricky and depends as much on the manager’s personality and interests as upon the merits of the proposal (and also, how much time you have: if your meeting is set for one hour, don’t waste it all on Question 1). Be smart and disciplined in your presentation.

Here’s a side tip: maintain control of your presentation. Your manager may think that he or she wants to know all those error codes, or some other details (or someone else in the room may ask). Try hard not to go there. Stay focused.

After all, you’ve got four more questions to answer.

Question Two: Why is it Good?

Why are you suggesting this idea? Why do we need it?

Specifically, what value will this thing, whatever it is, bring to us, to our organization, and/or to our company?

Will it save us money? Bring us a new revenue stream? Allow us to execute on our strategy more quickly or more efficiently? Be as quantitative as you can: “deploying this software will save us 20% year over year starting in 2022” or “implementing agile methodology will allow us to respond in days rather than months to new market opportunities.”

Question Three: Where Does it Fit?

I get it. It’s good. But we’re running a business here and we’ve already got stuff.  

Where does this idea fit in our ecosystem? If this is a new software package, how do we integrate it with the software we already have? Is this replacing something we already have? Or is it something new? (Please, please show me a picture.) What do we have to do to connect it to what we’ve already got? Does it conform to our enterprise standards and/or generally how we do things? How much do we have to change our existing stuff to make it compatible with the new thing?

How do users use it? What new things will they/I have to learn? Who’s going to feel left behind? What’s your transition or change management plan? Do you anticipate resistance to your project once it starts? How will you handle it?

Question Four: How Much is it Going to Cost?

It’s not just the procurement costs, that is, the capital or operating expenses I’m going to authorize to be shelled out, although I definitely need to know that. How many people do I have to assign, and for how long?

And: What am I not going to be able to do because I’m doing this? It’s amazing how many projects sound just wonderful in isolation but then lose some of their luster when stack-ranked against other worthy proposals.

Question Five: When Will It Be Done?

How long will all this take? Is this a very straightforward, well-bounded project, or is this likely to be one of those projects that goes on, and on, and on? Make no mistake, I’m going to hold you to this, and while I understand projects slip for good reasons sometimes – bugs, key people quit, dependencies fall through – I’m expecting you, to the best of your ability, to make good on your schedule promises.

It’s not just about you. As a manager I have lots of other projects under way, and I may want to know, for example, when the resources you’re using will free up for another project.

[EDIT]: SteveB’s Question!

My pal Wes Miller reminds me that Steve Ballmer’s favorite question was “If we build this, how many zeroes does it add to revenue per year?”

Easy, Right?

If you can answer these five questions, then I guarantee we’ll have a productive conversation – no guarantee, however, that your proposal will be accepted. But I’ll come away feeling I had enough data to make the decision.

Oh, and …

One other thing: if I ask you a question to which you don’t know the answer, just say you don’t know. And – key point here – then say you’ll find out. Not knowing is understandable, but you need to always follow up.