Barry’s Holiday Wish List for Windows

Now as you all know, I love Microsoft Windows. I have used it and its predecessor DOS since the early 1980s (yes, I’m old); its evolution over the years has been little short of amazing. And of course I worked for Microsoft here in the Pacific Northwest for a decade and a half.

That said.

I have a number of everyday gripes that I just wish Microsoft would fix once and for all. None of these, in my view as a software developer of fifty years’ standing (wow), appears very difficult – so please, team, just fix them!

In no particular order:

Make Authenticator Work With Apple Watch

Unlike my younger comrades, my iPhone is not an appendage to my body. Often (really often) I’m at my PC when some account times out, I have to type in the magic number, and…where did I leave my phone?

I imagine there’s some Bluetooth security issue with making it work on the watch, but why can’t we fix it?

Let Outlook and Teams Share Identities

How many times have you had to sign into your email account (using Authenticator) and moments later had to repeat the process with Teams?

This feels like something the relevant engineering groups could sort out with a single meeting. Just saying.

Settings and Control Panel

Just this morning I was attempting to move the default location of Windows Update from C: to D:. It’s not clear this is even possible, but searching for answers yields any number of inconsistent results – because with nearly every release of Windows some settings move, change, get deleted, or migrate from Control Panel to Settings, or whatever.

Dear Microsoft: get rid of Control Panel once and for all. Or Settings. Whatever. And then don’t change the UI. Ever.

Sound: Part 1

Save the last volume setting and don’t reset it to the super-loud default for no apparent reason. Every time I load Teams or YouTube, Windows blasts my ears.

Sound: Part 2

This one’s a bit esoteric but applies, I imagine, to any musician attempting to use Windows. I play the pipe organ (badly) and use Hauptwerk. There’s a known issue with the Windows audio subsystem in which input from MIDI keyboards is batched – which means that when you press a key there’s noticeable latency. It makes Windows essentially unusable for MIDI (music) devices unless you buy external hardware (I use an inexpensive PreSonus AudioBox).

This works with no issue on a Mac – should be easy to fix on Windows.

Clean Up the C: Drive

I’ve complained about this before. Microsoft installs apps in C:\Program Files, C:\Program Files (x86), C:\Users\You\AppData (and three folders within)…why? (And \AppData is hidden!) Macs just have /Applications. It’s a mess.

Moreover: there’s so much junk on the C: drive, some of it from Microsoft, a lot of it from vendors – like the 13GB (!) installer for my keyboard and mouse from Razer. There are .DMP files, log files that never get purged or deleted but rather grow forever, literally occupying tens of gigabytes of space. Microsoft should develop and enforce rules about how the C: drive is used. It’s the Wild West now.

What Changed?

Because I have a relatively small C: drive (256GB SSD) I keep an eye on free space. (I wrote my own df command-line app to report it.)
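For the curious, a bare-bones df-style report takes only a few lines of Java using the standard java.io.File space APIs – this is just a sketch of the idea, not the actual app:

    import java.io.File;

    // Minimal df-style report: print total, free, and usable space for each filesystem root.
    public class Df {
        public static void main(String[] args) {
            final long GB = 1024L * 1024L * 1024L;
            for (File root : File.listRoots()) {
                System.out.printf("%-4s total: %4d GB  free: %4d GB  usable: %4d GB%n",
                        root.getPath(), root.getTotalSpace() / GB,
                        root.getFreeSpace() / GB, root.getUsableSpace() / GB);
            }
        }
    }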

One day I have 13GB free. Another day, 8GB. Then 4GB, 2GB. Then 10GB. Why? What changed? (It wasn’t a Windows Update.)

I use the invaluable Wiztree to report on disk usage but it doesn’t show what changed from one day to the next. And I would like to know – and control – when and where the downloads happen.

Why Is It Slow?

Recently on my machine (an i9 with 64GB of RAM and up-to-date antivirus) that old reliable Ctrl-Alt-Del app, Task Manager, takes forever to load. And sometimes (like right now) it displays “(Not Responding)” in the title bar.

Why? Not even Bing Chat or ChatGPT can help, other than to give some banal and useless advice.

Ultimately I’d really like to know What My Machine Is Doing, and have the tools to (easily) dive down to the bit level. I fear, however, that’s a whole new OS rewritten from scratch.

Fixing Alexa

The news has it that Amazon is laying off “thousands” of workers, many of them in the Alexa group.

That’s too bad, but probably not surprising. As an outsider, I view Alexa as a technology with an identity crisis — it tries to do many, many things and does none of them particularly well.

Don’t get me wrong. I love Alexa — I have an Alexa Show in (almost) every room in the house. But as useful and (occasionally) fun as it is, it can also be incredibly annoying.

Here’s my recipe for fixing it:

  1. Forget about Alexa “helping” Amazon. I won’t ever buy anything through Alexa. Forget it. Alexa is not a supporting character in the Amazon universe: it’s not a new “channel”; it’s a star in its own right. Stop advertising.
  2. Forget about “monetizing” Alexa. Forget it! Stop wasting time and build stuff I’ll get a kick out of. Make your money from the sale of the devices.
  3. Embrace what Alexa is used for. All of our Alexa Shows are primarily used as digital picture frames connected to Amazon Photos. Yeah, and the weather screen is helpful too. Oh, yeah, the timer app is helpful in the kitchen.
  4. Embrace what Alexa could be used for. The most exciting use case for Alexa is driving home automation. Make it work seamlessly with Blink and all the other gadgets (and, by the way, how about some really high-end home security products? 24×7 video monitoring, etc.). Build in all the home automation protocols — Zigbee, etc. Interoperate with Apple and Google devices — be the first!
  5. Give me management. I have a fleet of Alexas — I want to manage them all from one place (preferably my PC where I have lots of real estate, and absolutely positively NOT my phone where I can barely read the Alexa app’s tiny font!). I want to be able to set preferences and settings for all the Alexas in my home at a stroke. While you’re at it, give us an API that can be used for more than just skills development.
  6. Stop being annoying. Stop showing me yesterday’s news. Stop asking me if I have the flu.
  7. While you’re at it, fix the Photos app. It’s really terrible — it’s slow, it has memory leaks, and does stupid stuff (like it uploads HEICs but you can’t see them on the web or on Alexa). There’s a real opportunity for a great cloud photos app which Alexa could leverage: do it!

That’s for starters. I have a few thousand other ideas but the main thing here is focus. Alexa should be about usefulness in the home, not about selling me more stuff or advancing the Amazon brand.

The Rise and Fall of Lotus eSuite

By Barry Briggs

[This is a draft based on my recollections. I’m sure it’s not complete or even 100% correct; I hope that others who were involved can supplement with their memories which I will fold in. Drop a comment or a DM on Facebook or Twitter @barrybriggs!]

In 1997, Lotus Development, an incredibly innovative software firm that had previously created Lotus 1-2-3, for a time the most popular software application on the planet, and Lotus Notes, for a time the most widely used email and collaboration application, released a set of Java applets called eSuite.

You could say a lot of things about Lotus eSuite: it was, well, very cool, way (way) ahead of its time, and for a very brief period of time had the opportunity of dethroning Microsoft Office from its dominant position. Really. Well, maybe.

But it didn’t.

What went right? What went wrong?

Here is my perspective. Why do I have anything to say about it? Well, I was intimately involved with eSuite. You might even say I invented it.

Java and Platform Independence

In the bad old days of application development, you wrote an app in a language like C or C++ (or even assembler) which compiled/assembled to machine code. That code could only be executed by the specific type of processor in the box, like an Intel 80386. Moreover, your code had to interact with its environment — say, Windows — which meant it had to call upon system services, like displaying a button or a dialog box.

If you wanted to run your code on a different architecture, say a Motorola 68000-based Mac, you had to make massive changes to the source, because not all C compilers were alike, and because the underlying services offered by Windows and Mac were quite different. You coded a button on Windows very differently from one on MacOS or X-Windows. Hence at Lotus we had separate, large teams for Windows, OS/2, and Mac versions of the same product. (In fact, we were occasionally criticized for having spreadsheet products that looked like they came from different companies: the Mac, OS/2, and Windows versions of 1-2-3, built to conform to those platforms’ user interface standards, did look very different.)

Back to our story.

In 1995, Sun Microsystems released the first version of their new high-level programming language, Java. Compiling to byte codes instead of machine code, it had huge promise because, the theory went, you could “write once, run everywhere.” In other words, each platform – Windows, Mac, Sun, Unix (Linux was still nascent) – would have a runtime which could translate the byte codes into executable code appropriate for that device.

Perhaps even better, Java’s UI library (the AWT, or Abstract Window Toolkit) “abstracted” (wrapped) the underlying operating system services behind a common API. The AWT’s function to create a button created a Windows button on Windows, a Mac button on MacOS, and so on.
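To make that concrete, here’s roughly what the developer experience looked like – a minimal sketch using the real AWT classes, not anything we actually shipped:

    import java.awt.Button;
    import java.awt.FlowLayout;
    import java.awt.Frame;

    // The same few lines produce a native-looking window and button on Windows, Mac,
    // or a Sun workstation -- the AWT maps them to the platform's own widgets underneath.
    public class HelloAwt {
        public static void main(String[] args) {
            Frame frame = new Frame("Hello, AWT");
            frame.setLayout(new FlowLayout());
            frame.add(new Button("Press me"));
            frame.setSize(300, 100);
            frame.setVisible(true);
        }
    }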

Cool! So why was this more than just a neat technical achievement?

At the time, Microsoft largely dominated personal computing, and its competitors, principally Lotus and Sun, faced existential threats from the Redmond giant. (I’m not going to spend much time talking about how Microsoft achieved this position. There are many varied opinions. My own view, having worked at both Lotus and Microsoft, and thus having seen both companies from the inside, is that Microsoft simply outcompeted the others.)

In any event, many saw Java as a godsend, having the potential to release the industry from Microsoft’s stranglehold. In theory, you could write an application and it could run on anything you like. So who needed Windows? Office?

Browsers

Even cooler, Marc Andreessen’s Netscape Navigator introduced a Java runtime into version 2 of its browser, which at the time pretty much owned the marketplace. Microsoft’s Internet Explorer followed with Java support shortly thereafter.

Everybody at the time recognized that browser-based computing was going to be terribly significant, but web-based applications – especially dynamic, interactive user interfaces in the browser – were primitive (and ugly!) at best. HTML was both very limited and extremely fluid at the time; the W3C had only been founded in 1994 and in any event the value of web standards had yet to be recognized. Browser developers, seeking to gain advantage, all created their own tags more or less willy-nilly. A very primitive form of JavaScript (confusingly, not at all the same as Java) was also introduced at this time, but it couldn’t do much. And the beautiful renderings that CSS makes possible still lay in the future.

Anyway, Netscape and IE introduced an <applet> tag which let you embed (gulp) Java code in a web page. Sounded great at the time: code in a web page! And Netscape had browser versions for Windows, for Mac, for Sun workstations…you could write an applet and it would magically work on all of them. Wow!
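A toy applet of the era looked roughly like this – a sketch, not eSuite code; the embed tag in the comment is the sort of thing a page author would write:

    import java.applet.Applet;   // the legacy applet API, long since deprecated
    import java.awt.Graphics;

    // Compiled to byte codes, served from a web server, and embedded in a page with
    // something like: <applet code="HelloApplet.class" width="300" height="100"></applet>
    public class HelloApplet extends Applet {
        @Override
        public void paint(Graphics g) {
            g.drawString("Hello from a Java applet!", 20, 40);
        }
    }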

A word on security (also kind of a new idea at the time, not widely understood and – in my view – universally underestimated). A web page could run Java in what was called a sandbox, meaning it was isolated from the rest of the platform – the idea being that you didn’t want a web page to delete all the files on your PC, or scan it for personal information.

I’ll have more to say about applet security in a moment.

Enter Your Hero

Somewhere around this time, being between projects, I started playing with Java. I had in my possession a chunk of source code that Jonathan Sachs, the original author of 1-2-3, had himself written as an experiment to test the then-new (to PCs: yes, purists, I know it had been around on Unix for years) C language. (How archaic that sounds today!) I have to say before going forward that Sachs’ code was just beautiful – elegant, readable, and as far as I could see, bug-free.

So I started porting (converting) it to Java. Now Java can trace its roots to C and C++ so the basics were fairly straightforward. However, I did have to rewrite the entire UI to the AWT, because 1-2-3/C, as it was called, was not coded for a graphical interface.

And…it worked!

I started showing it around to my friends at Lotus and ultimately to the senior managers, including the Co-CEOs, Jeff Papows and Mike Zisman, who saw it as a new way to compete against Microsoft.

Could we build a desktop productivity suite hosted in the browser that ran on all platforms, and thus do an end run around the evil Redmondians?

Things Get Complicated

Suddenly (or so it seemed to me) my little prototype had turned into a Big Corporate Initiative. Some of my friends and colleagues started playing with Java as well, and soon we had miniature versions of an email client, charting, word processing based on our thick client app Ami Pro, calendaring and scheduling based on Organizer, and presentation graphics based on Freelance Graphics.

And my colleague Doug Wilson, one of the 1-2-3 architects, came up with a brilliant way to integrate applets using a publish-and-subscribe pipeline called the InfoBus, the API to which we made public so anybody could write a Kona-compatible applet.

InfoBus was really an amazing innovation. With InfoBus we were able to componentize our applications, letting users create what today would be called composite apps. The spreadsheet applet was separate from the chart applet but communicated with it through the InfoBus – giving the illusion of a single, integrated application: a spreadsheet applet and a charting applet hosted together on a web page.
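The effect, in spirit, was something like the toy publish-and-subscribe sketch below. To be clear, this is not the real InfoBus API – the class and method names are invented purely to illustrate how a spreadsheet component could feed a chart component without either knowing about the other:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Consumer;

    // A toy publish-and-subscribe bus: producers publish named data items,
    // consumers subscribe to the names they care about. (Not the real InfoBus API.)
    class ToyBus {
        private final Map<String, List<Consumer<Object>>> subscribers = new HashMap<>();

        void subscribe(String name, Consumer<Object> consumer) {
            subscribers.computeIfAbsent(name, k -> new ArrayList<>()).add(consumer);
        }

        void publish(String name, Object value) {
            subscribers.getOrDefault(name, List.of()).forEach(c -> c.accept(value));
        }
    }

    public class CompositeAppDemo {
        public static void main(String[] args) {
            ToyBus bus = new ToyBus();
            // The "chart applet" subscribes to a named range...
            bus.subscribe("Sales.Q1", data -> System.out.println("Chart redraws with: " + data));
            // ...and the "spreadsheet applet" publishes whenever that range recalculates.
            bus.publish("Sales.Q1", List.of(1.2, 3.4, 5.6));
        }
    }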

Twenty-five years ago this was pretty awesome.

To make it all official, we had a name for our stuff: “Codename Kona,” we called it, playing off of the coffee theme of Java. (Get it?) Personally I loved this name and wanted it for the official product name…but there were issues. More on this in a moment.

And then a few things happened.

IBM

In June of 1995, IBM (heard of it?) bought Lotus. I heard the news on the radio driving in to our Cambridge, Massachusetts office, and was both horrified and relieved. Lotus – frankly – wasn’t doing all that well so getting bailed out was good; but IBM? That big, bureaucratic behemoth?

IBM purchased the company primarily for Notes, as their mainframe-based email system, Profs, was an abject failure in the marketplace, and Notes, far more technologically advanced, was doing fairly well. And since everybody needed email, owning the email system meant you owned the enterprise – at least that was the contention, and the investment thesis.

To my surprise, IBM showed far less interest in the desktop apps (which we’d named SmartSuite to compete with Office). They couldn’t care less about what was arguably one of the most valuable brands of the time – 1-2-3. But Kona fit into their networked applications strategy perfectly, which (I suppose) beat some of the alternatives at least.

The Network Computer

IBM had another strategy for beating Microsoft on the desktop, and again, Kona fit into it like a glove: the network computer. The NC, in essence, was a stripped-down PC that ran only enough system software to host a browser – no Windows, no Office; everything ran off the servers (where IBM, with mainframes and AS/400s, ruled in the data center, and Sun dominated the web).

Oh, my. So we split up the teams: one focused on delivering Kona for browsers, the other, led by my late friend the great Alex Morrow, for the NC.

Lotusphere

Jeff and Mike, our co-CEOs, wanted to showcase Kona at Lotus’ annual developer convention, Lotusphere, held every winter at Disney World in Florida, at the Swan and Dolphin auditorium. Ten thousand people attended in person. (Hard to imagine these days.)

Including, by the way, the CEO of IBM, Lou Gerstner, and his directs.

We had great plans for the keynote address. We developed a script. We hired professional coaches to help us learn the finer points of public speaking. We rehearsed and rehearsed and rehearsed. Larry Roshfeld would do a brief introduction, then I would do a short demo on Windows, and then Lynne Capozzi would show the same software (“write once run anywhere,” remember?) on an NC.

Things went wrong.

First, my microphone failed. In front of this ocean of people I had to switch lavaliers: talk about embarrassing! (These days I tell people I’ve never been afraid of public speaking since – nothing that traumatic could ever happen again!)

But that wasn’t the worst.

In front of all those customers and executives, the NC crashed during poor Lynne’s demo. She handled it with remarkable grace and, as I recall, she rebooted and was able to complete the demo – but talk about stress!

Bill and I

Now as competitive as Lotus and Microsoft were on the desktop, there were, surprisingly, areas of cooperation. For a time, the primary driver of Windows NT server sales was Lotus Notes, and so (again, for a very brief time) it behooved Microsoft to make NT work well with Notes.

And so Jeff, several Notes developers, and I hopped a plane – the IBM private jet, no less! – for a “summit conference” with Microsoft.

We spent a day in Building 8, where Bill then had his office. It was not my first time at Microsoft – I’d been there many times for briefings – but it was to be my first meeting with Bill. After several NT presentations he joined us during Charles Fitzgerald’s talk on Microsoft’s version of Java, called Visual J++ (following the Visual C++ branding). I’ll have more to say about J++ in a minute.

This being my space, I asked a lot of questions, and had a good dialogue with Charles. (I had more conversations with him over the years and always found him to be brilliant and insightful; read his blog, Platformonomics – it’s great.) At one point, however, Bill leaned forward and pointedly asked, “Do you mean to tell me you’re writing serious apps in Java?”

To which I replied, “Well, yes.”

“You’re on drugs!” he snapped.

Thus ended my first interaction with the richest man in the world.

Launch

Nevertheless, perhaps because of IBM’s enormous leverage in the marketplace, customers expressed interest in Kona and we got a lot of positive press. Many resonated with the idea of networked applications that could run on a diverse set of hardware and operating systems.

And we were blessed with a superior team of technically talented individuals. Doug Wilson, Alex Morrow, Reed Sturtevant, Jeff Buxton, Mark Colan, Michael Welles, Phil Stanhope, and Jonathan Booth were just some of the amazing, top-tier folks that worked on Kona.

Kona.

As we drew closer to launch, the marketing team started thinking about what to officially name this thing. I – and actually most of the team including the marketing folks – favored Kona: slick, easy to remember, resonant with Java.

We couldn’t, for two reasons.

One: Sun claimed, by virtue of its trademarking of the Java name, that it owned all coffee-related names and they’d take us to court if we used “Kona.” I was incredulous. This was nuts! But we didn’t want to go to war with an ally, so…

Two: it turns out that in Portuguese “Kona” is a very obscene word, and our Lisbon team begged us not to use it. We were all forced to agree that, unlike Sun’s, this was a fair objection.

The marketing team came up with “eSuite,” which, truth be told, I hated. But I understood it: rumor had it that IBM, our new parent, had paid their advertising firm tens of millions of dollars for their internet brand, which centered around the use of the letter “e” — as in eCommerce and e-business. (Hey, this was 1995!) So our stuff had to support the brand. I guess that made sense.

So What Went Wrong?

eSuite was a beautiful, elegant set of applications created by an incredible team of talented developers, designers, testers, product management, and marketers. So why did it ultimately fail? Others may have their own explanations; these are mine.

Microsoft Got Java Right, None of the Others Did

Paradoxically, the best Java runtime – by far – was Microsoft’s. Sun had written a Java runtime and AWT for Windows, but it used a high-level C++ framework called the Microsoft Foundation Classes (MFC). MFC (which itself abstracted a lot of the complexity of the underlying windowing and input systems, among other things) was great for building business apps – it was the C++ predecessor to Windows Forms, for the initiated. But it was absolutely wrong for platform-level code: the AWT on MFC was an abstraction on top of an abstraction and, as a result, it was sssslllooowww. Similar story for Apple, and, believe it or not, for Sun workstations.

Microsoft on the other hand rewrote the Windows version of the AWT directly to Win32, in effect, to the metal. Hence it was way faster. And it re-engineered a lot of other areas of the runtime, such as Java’s garbage collector, making it faster and safer. Not only that, J++, as Microsoft’s version was called, was integrated into Microsoft’s IDE, Visual Studio, and took advantage of the latter’s excellent development, editing, and debugging tools – which no other vendor offered.

I attended the first JavaOne convention in San Francisco. Microsoft’s only session, which was scheduled (probably on purpose) late on the last day, featured an engineer going into these details in front of an SRO audience.

I remember thinking: okay, if you want the best Java, use Windows, but if you’re using Windows, why wouldn’t you just use Office?

Security

Now in fairness, the Java team was very focused on security; I mentioned the sandboxing notion that the applet environment enforced, which has since become a common paradigm. They rightly worried about applets making unauthorized accesses to system resources like files, so at first any access to these resources was prohibited (a good thing). Later, in v1.1, they implemented a digital-signature-based approach to let developers create so-called “trusted” applets.

But that wasn’t all.

In effect, on load, the runtime simulated execution of the applet, checking every code path to make sure nothing untoward could possibly happen.

Imagine: you load a spreadsheet applet, and it simulates every possible recalculation path, every single @-function. Whew! Between network latency and this, load time was, well, awful.

The Network Computer was DOA

So, if you only want to run a browser, and you don’t need all the features of an operating system like Windows, you can strip down the hardware to make it cheap, right?

Nope.

I remember chatting with an IBM VP who explained the NC’s technical specs. I tried telling him that eSuite required at least some processing and graphics horsepower underneath, to no avail. In fact, as I tried to point out, browsers are demanding thick-client applications requiring all the capabilities of a modern computer.

(Chromebooks are the spiritual descendants of NCs but they’ve learned the lesson, typically having decent processors and full-fledged OSs underneath.)

Sun and Lotus Had Different Aspirations

In a word, Lotus wanted to use Java as a way to fight Microsoft on the office applications front. Basically, we wanted to contain Microsoft: they could have the OS and the development tools on Intel PCs, but we wanted cross-platform applications that ran on Windows and everywhere else – which we believed would be a huge competitive advantage against Office.

To achieve that, Lotus needed Sun to act like a software development company, a supplier – ironically, to behave a lot like Microsoft’s developer division did with its independent software vendors (ISVs): with tools, documentation, and developer relations teams.

Sun (as best as I could tell) wanted to be Microsoft, and its leadership seemed to relish the idea of a war (the animosity between Sun CEO Scott McNealy and Bill Gates was palpable). Sun couldn’t care less about allies, as the silly little skirmish over naming proved. But it clearly didn’t understand the types of applications we built, and certainly didn’t understand the expectations users had for their apps. Instead Sun changed course, focusing on the server with Java-based frameworks for server apps (the highly successful J2EE).

Perhaps somewhere along the line it made the business decision that it couldn’t afford to compete on both server and client – I don’t know. In any event the decline of the applet model opened the door to JavaScript, the dominant model today.

Eventually, and tragically, Microsoft abandoned Visual J++ and its vastly better runtime. Why? Some say that Microsoft’s version failed to pass Sun’s compliance tests; others, that Microsoft refused Sun’s onerous licensing demands. In any event, there was a lawsuit, Microsoft stopped work on J++, and some time later it launched C#, a direct competitor to Java that has since rivaled it in popularity.

ActiveX

Not to be outdone, Microsoft introduced its own components-in-browsers architecture, called ActiveX. Unlike Java, ActiveX did not use a byte-code approach, nor did it employ the code-simulation security strategy that applets had. As a result, ActiveX controls, as they were called, performed much better than applets – but they only ran on Windows. And the FUD (fear, uncertainty, and doubt) ActiveX created around Java applets was profound.

Lotus’ Priorities Changed

Lotus/IBM itself deprioritized desktop application development in favor of Notes, which was believed to be a bigger growth market. Much as I admired Notes (I’d worked on it as well), I didn’t agree with the decision: Notes was expensive, it was a corporate sell, and it had a long and often complicated sales cycle. I never believed we could “win” (whatever that meant) against Microsoft with Notes alone.

It was true that early on Exchange lagged behind Notes but it was also clear that Microsoft was laser-focused on Notes, so our advantage could only be temporary.

Someone told me, in effect, that Office was a multi-billion-dollar business while SmartSuite was a $900 million business, so why fight tooth and nail in the trenches for every sale? My mouth dropped open: why abandon an almost billion-dollar revenue stream? (Office is now around $60 billion in annual revenue, so staying in the game might have been good. Yes, hindsight.)

eSuite Was Ahead of its Time

Today, productivity applications in the browser are commonplace: you can run Office applications in browsers with remarkably high fidelity to the thick-client versions. Google Docs offers similar, if more lightweight, capabilities.

Both of these run on a mature triad of browser technologies: HTML, JavaScript, and CSS. And the PCs and Macs that run these browsers sport processors with billions of transistors and rarely have less than 8 gigabytes of memory – hardly imaginable in the mid-1990s.

And eSuite depended upon secure, scalable server infrastructure conforming to broadly accepted standards, like authentication, and high-speed networks capable of delivering the apps and data.  

All that was yet to come. Many companies had yet to deploy networks, and those that had done so faced a plethora of standards – Novell, LAN Manager, Banyan, and so on. Few had opened their organizations to the internet.

eSuite’s Legacy

I hope you’re getting the idea that the era of eSuite was one of rapid innovation, of tectonic conflict, competition, and occasional opportunistic cooperation between personalities and corporations, all powered by teams of incredibly skilled developers in each. The swirling uncertainties of those times have largely coalesced today into well-accepted technology paradigms, which in many ways is to be applauded, as they make possible phenomenally useful and remarkable applications like Office Online and Google Docs (which, I’m told, is now called “GSuite”). In other ways – well, all that chaos was fun.

I wonder sometimes if eSuite might have seen more adoption had Lotus simply stuck with it longer. To be fair, IBM, which had originally promised to remain “hands-off” with Lotus, increasingly focused on Notes and its internet successor, Domino; I’m guessing (I was gone by this time) that they saw Domino as their principal growth driver. Desktop apps were more or less on life support.

Still, by the early 2000s the concepts of web-based computing were becoming better understood: the concept of web services had been introduced, PCs were more capable, and networks had standardized on TCP/IP. Who knows?

Timing, they say, is everything.

Composability and Events

Apparently one of the new buzzwords is composability, meaning everything from reorganizing (“pivoting”) your business quickly in response to changing market conditions to adding new technical capabilities to your applications as needed. As new features come online, the story goes, you should be able to seamlessly (that word!) add them to your applications as you need them, and ditch the ones you don’t need any more.

Now, let’s see, where oh where have I heard this story before? DLLs, Java applets, ActiveX, Enterprise JavaBeans, Service-Oriented Architecture, Service Provider Interfaces, the API Economy: it seems like every few years we have to rediscover how utterly cool modularity and (if we’re really chic) loose coupling are.

Technically, composability appears to mean something like a combination of SPIs and APIs. Microsoft touts the fact that it’s easy to add a FedEx module to Dynamics to enable shipping when it absolutely, positively has to be there overnight.

Cool.

Real composability, it seems to me, means a near-infinitely malleable product whose behavior can be adapted to any reasonable need.

How do you do that? (What does that even mean?)

Of course part of the answer involves a good, solid set of APIs to an application, documented, hopefully, with OpenAPI (nee Swagger) or something similar. Enough has been written about Why APIs Are Good that I’m not going to repeat their virtues.

But what about when you want to change, or augment, or even replace the core processing of an application feature? Well, of course many applications support events so you can know when they’re about to do something, or when they’ve done something.

But back in the day, working on Lotus 1-2-3, my team and I decided we needed something more powerful. Our scripting language team (LotusScript) was demanding deep access to the product internals, and our addons, like our Solver, even deeper access. They needed to execute code in some cases before the relevant application code and in some cases after it – for example, to sideload a file needed by the addon. And in certain cases – for example, loading a file type not supported by the original app – they needed to replace the existing code.

We had a pretty comprehensive set of APIs. But they didn’t solve the problem.

The Problem

Here’s the core idea: imagine a file load routine (this is pseudocode, so don’t get upset):
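Something like this (a Java-flavored sketch; the handler names are made up):

    // Parse the file extension and pass the file off to the right handler.
    void openFile(String path) {
        String ext = path.substring(path.lastIndexOf('.') + 1).toLowerCase();
        switch (ext) {
            case "wk4":
            case "wk3": loadWorksheet(path);     break;
            case "txt":
            case "csv": loadDelimitedText(path); break;
            default:    showError("Unsupported file type: ." + ext);
        }
    }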

Pretty straightforward: parse the file extension and pass it off to the right handler. No worries.

But what if you want to load a PDF? Or a text file? Or an MP3, for whatever reason? (Hey why not?)

Introducing the Event Manager

The idea of our Event Manager was simple: an addon could register for an event that happened before the core code ran, and/or an event that ran after the core code. In addition, the addon could return one of three values:

  • Ran successfully
  • Ran successfully, and bypass core code
  • Error

In other words, something like this:
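(A sketch in the same Java-flavored pseudocode – the AfterEvent name and the enum are just my shorthand for the three return values listed above.)

    // The dispatcher with the Event Manager wrapped around the core code.
    enum EventResult { OK, OK_BYPASS_CORE, ERROR }

    void openFile(String path) {
        // Fan the Before-Event out to every addon registered for "OpenFile".
        EventResult before = EventManager.BeforeEvent("OpenFile", path);
        if (before == EventResult.ERROR) {
            showError("An addon failed while opening " + path);
            return;
        }
        if (before != EventResult.OK_BYPASS_CORE) {
            coreOpenFile(path);   // the extension-dispatch routine shown earlier
        }
        // After-Events are for logging, shadow files, and the like;
        // addons handle their own errors here.
        EventManager.AfterEvent("OpenFile", path);
    }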

Here you can see that the first thing that happens is that any addons that have registered for the “OpenFile” Before-Event get notified; they can ignore, augment, or replace the core handling, and thus can load a wholly new file type if desired. (EventManager.BeforeEvent() fans out the event to all registered addons.)

The After-Event has fewer options, for obvious reasons. It can be used for logging, or to (say) load a shadow file (as many of the 1-2-3 addons did). In this case the addon has to handle any errors that occur, as the core code may not understand the addon’s semantics.

Value

We found this pattern very useful in 1-2-3, so much so that I ported the concept to Lotus Notes some time after. In some ways, I think, this provides a good benchmark of what composability should really be.

Loyalty and Competence

A recent document allegedly leaked from the Kremlin accuses the Russian hierarchy of being based upon loyalty, not professionalism. “Accordingly,” the author writes, “the higher the level of leadership, the less reliable information they have.”

This raises some interesting questions: shouldn’t, after all, an organization have an inherent basis in loyalty across the levels of the hierarchy? If so, which is more important, competence (or professionalism) or loyalty?

Let’s spend a moment examining this dichotomy. I’ll posit – because I’ve seen them – that in business there exist loyalty-centric organizations and competence-based organizations. Each has its merits, but each has serious weaknesses.

The Loyalty-Based Organization

Upon ascending to the American presidency, Donald Trump famously asked his staffers to swear their personal loyalty to him. Whether this was because he felt insecure in his new role, or threatened, or because he had some other motive will likely never be known.

Similarly, in the military, loyalty is a mandate: follow orders or people die.

Every manager wants his or her teams to have some amount of personal loyalty; that’s only human. Loyalty-based organizations take this to an extreme, however: the most loyal get the biggest raises, the juiciest assignments, and so on.

Still, such organizations have advantages. For example, a manager’s wish is followed – quickly – to the letter, which can be very satisfying (for the manager), and such organizations as a result often develop the reputation that they “get things done.”

However, there are some obvious downsides. A manager may hire less competent individuals – or favor them – if he or she deems them loyal, which lowers the overall capability of the organization. Moreover, highly skilled employees will often recognize the existence of a clique – and leave. The work product of such a team will not infrequently be mediocre.

The Competence-Based Organization

At the other end of the spectrum, competence-based organizations place the highest values on skills, knowledge, and professionalism. The driving factor in such organizations is not coming up with an answer, but rather the best answer – often, regardless of how long it takes or whose feelings get hurt along the way.

Competence-based organizations typically seek employees with the highest degrees, with the most accomplishments, but often have trouble keeping them; who wants to stay in a place where analysis takes precedence over accomplishment, where argument is the order of the day? Moreover, what manager wants to stay where employees have no respect or loyalty?

The Ideal

Obviously, organizations should strive for some balance between the two; it’s vitally important for teams to distinguish the relative values of competence and loyalty and strive to create a corporate culture that supports both, one in which healthy, animated discussion of options has its place, in which decisions are made with an open mind – but they are made.

In the real world of course most organizations swing more to one side or the other. As an employee you should know which your organization is; and as a manager, which of the two management styles you’ve created, and perhaps think about making adjustments.

So What Do You Do?

Well, your first decision is do you want to stay in this organization?

Assuming the answer is yes, then if you’re on a loyalty-centric team, it’s probably a good idea to demonstrate loyalty, perhaps by complimenting your boss (“Good idea!”) every now and then, or giving him/her credit (and maybe overdoing it a bit) during a meeting with your boss’s boss — even for one of your ideas! That sort of sucking up can be distasteful, but, hey, you said you wanted to stay.

If you’re in a competence-based organization, put on a program manager hat every now and then and see if you can drive decisions or an action plan (“I see we’ve got just five minutes left in this meeting, what’s the next step?”).

Sometimes, incidentally, what appears to be a competence-based team isn’t really — it’s just that the manager is afraid to take responsibility for a decision. If that’s the case, consider making the decision yourself (assuming you’re okay with the risk). That way the manager can feel comfortable that there’s someone else to point at if things go south (like I say, only if you’re comfortable with taking the responsibility).

Measuring the Value of Software Architecture

By Barry Briggs
[JUST A DRAFT RIGHT NOW!!]

Over the past few months I’ve been working with some old friends at the International Association of Software Architects (IASA) to try to figure out some way to quantitatively measure the value of software architecture. We’re trying to come up with answers to the following questions:

  • Why is software architecture good (i.e., why do you need software architects?)
  • How can you quantitatively assess an application or service?
  • What makes a good software architect?

These are difficult questions, particularly when you compare software architecture with other fields. For example, it’s relatively easy to quantify the value of a Six Sigma process-improvement organization: you measure time, resources required, and costs of a process before optimization, and then after, and you have a solid measurement of value – one that is simply not possible with software architecture.

Why?

Well, on a net-new project, architecture is applied at the very beginning, so it’s difficult to know if the lack of it would have made any difference. Arguably, on a rewrite of a project, one could compare against some set of criteria how much better the new version works vis-à-vis the old one – but there are usually so many other factors in such a project that it’s essentially impossible to separate out the contribution architecture makes. For example, faster hardware or just plain better coding might be the reason the new app runs faster, not the fact that the new design is factored more effectively.

The Army Barracks

Perhaps an analogy can help us tease out how to think about these questions. Software architecture is often compared (poorly) against physical, building architecture – but let’s try to make the analysis a bit more constructive (pun intended).

Consider something as mundane as an army barracks. How would we measure the quality of its architecture?

I suppose there are lots of ways, but here are mine.

First and foremost, does it do the job for which it was intended? That is, does it provide enough room to house the required number of soldiers, does it provide appropriate storage, bathrooms, and showers for them? Is it well insulated and heated? In other words, does it meet the immediate “business need?” If not – well, you certainly couldn’t assess its architecture as good in any way.

Then we could ask many other questions, such as:

  • Compliance with laws and standards, that is, building codes, Army regulations, local standards, and so on. Like business need, this one’s binary: if not compliant, no need to perform any additional evaluation.
  • How resilient is it? Can it withstand a power failure, a Force 5 hurricane or (since this is a military installation) a direct hit by an artillery shell?

  • How much load can it take? If there’s a general mobilization and much more space is needed, how many extra beds can it hold? 2x? 5x? 10x, in a pinch?

  • New workloads. The Army mandates that barracks become coed. Can the facilities be quickly adapted – if at all – to support separate sleeping areas, bathrooms, etc.?

  • How easy is it to add new features? For example, does it require a teardown to add air conditioning or can existing heating ducts be reused in the summer? How hard is it to install wi-fi hubs?

  • What about new components? Say the Army mandates that every barracks has to have a ping-pong table, which entails a building addition. Can such a thing be done quickly with minimal disruption?

  • Business continuity. Say the barracks does fall down in a storm. Are there sufficient facilities on the base – or on other bases – that the soldiers can be rehoused?

  • Aesthetics. OK, maybe this isn’t a good one for a barracks, but for other types of buildings – think I.M. Pei or Frank Lloyd Wright – aesthetics drive our view of good architecture.

You get the idea, and, hopefully, the analogy. In this case the value of good design – of architecture – is readily apparent.

Assessing Software Architecture

When we think about software architecture, we can apply similar criteria.

Business Need

If the software doesn’t satisfy business requirements, then – as we said above – it by definition cannot be “well-architected.” Determining how well software meets the need, however, can be an interesting and challenging discussion. For years, software development began with requirements documents, which could stretch to tens, hundreds, even thousands of pages; and managers would simply tick off the features that were implemented. (And as often as not by the time all the documented requirements were met, the business environment had changed, and the app was behind.)

With agile development, users are much more involved in development from the start, tweaking and mid-course-correcting the product during the development process. If there is a requirements document, it represents the starting point rather than a final statement – and this is good, because as the product takes shape, opportunities always present themselves, both to users and developers.

Still, how do we assess how well the product meets the need? Of course, one way is to ask users if they have the features they need; if not, something’s obviously missing.

But that’s not all.

Every line of code, every non-code artifact (e.g., images) should be traceable back to the business requirement. If there is a feature, somebody should be using it. Monitoring tools can help track which features are exercised and which are not. (The Zachman Framework was an early approach to documenting traceability.)

This applies to infrastructure as well. As infrastructure is increasingly documented through Infrastructure-as-Code (IaC), these Terraform or ARM or CloudFormation configurations should justify their choices from a business perspective: why this or that instance type is required because of expected load, or why SSD storage is needed because of anticipated IOPS.

Standards and Compliance

Like satisfying the business need, complying with relevant standards is binary: the software does or it doesn’t, and if it doesn’t, you’re done.

Now by standards we don’t mean “best practices” – we’ll talk about those in a moment. Rather, ensuring that personal data is anonymized in order to comply with GDPR, or that two-factor authentication against a central corporate provider (such as Active Directory) is used, or that only certain individuals have administrative privileges: where such standards are in place, they are mandatory, not complying places the organization at considerable risk, and thus the system cannot be assessed as well-architected.

However, best practices can be more flexible. For example, a cloud governance team may mandate the use of a particular cloud provider, a certain set of landing zones, a particular relational database, and so on. In rare cases exceptions may be granted. The goal of such guidelines is to speed development and ease operations by removing the need for every development team to waste time selecting the appropriate provider or service and for operations teams to learn them all.

Granting such exceptions must be intentional, that is, careful analysis should uncover the core need for the exception; it should be documented and possibly, the best practice should be updated.

Defining Your Software Architecture Strategy

As is true with best practices, the definition and importance of other aspects of software architecture will necessarily vary from organization to organization. When developing architecture assessments, organizations should consider what their goals regarding software architecture are. For example, what are the relative priorities of:

  • Application performance
  • Application scalability
  • Developer productivity
  • Business continuity, including RTO/RPO
  • Application visibility (observability) and self-healing
  • Software extensibility
  • Ease of upgrade
  • Usability (e.g., is it mundane/functional or beautiful?)

For example, for organizations that are not multinational, georedundancy or multi-regional replicas may not be necessary. Others may decide that the expense of active-active BC/DR solutions is too high.

Moreover, different applications will attach different levels of importance to these criteria. For example, an intranet application that shows cafeteria menus need hardly be georedundant or be built with microservices – it wouldn’t hurt, but perhaps resources could be devoted elsewhere!

Strategy to Principles to Assessment

Having defined the organization’s strategic goals for software architecture – i.e., what good software architecture is and why it’s necessary – actionable principles can be developed. By “actionable” we mean that developers can look at them and understand what must be implemented, and perhaps even how.

For example, if a key strategic goal is that applications should be extensible, then a principle – that a developer can use – is that apps should have a REST API, documented with OpenAPI or the like.

A good starting point can be popular industry principles, such as The Twelve-Factor App, which was originally intended to guide the development of SaaS applications but is in fact very broadly applicable (shown below, via Wikipedia).

I. Codebase – There should be exactly one codebase for a deployed service, with the codebase being used for many deployments.
II. Dependencies – All dependencies should be declared, with no implicit reliance on system tools or libraries.
III. Config – Configuration that varies between deployments should be stored in the environment.
IV. Backing services – All backing services are treated as attached resources, attached and detached by the execution environment.
V. Build, release, run – The delivery pipeline should strictly consist of build, release, run.
VI. Processes – Applications should be deployed as one or more stateless processes, with persisted data stored on a backing service.
VII. Port binding – Self-contained services should make themselves available to other services by specified ports.
VIII. Concurrency – Concurrency is advocated by scaling individual processes.
IX. Disposability – Fast startup and shutdown are advocated for a more robust and resilient system.
X. Dev/prod parity – All environments should be as similar as possible.
XI. Logs – Applications should produce logs as event streams and leave the execution environment to aggregate.
XII. Admin processes – Any needed admin tasks should be kept in source control and packaged with the application.

We can learn several things from 12-Factor:

Principles Must be Easy to Understand, and Actionable

There are many ways of framing principles, of which 12-Factor is just one. What is key is that developers should intuitively understand what it means to implement them. For example, in 12-Factor, “any needed admin tasks should be kept in source control” easily translates to putting IaC artifacts in a GitHub repo.

Another common approach to documenting principles is called PADU, which stands for Preferred, Acceptable, Discouraged, and Unacceptable. PADU is attractive because it enables a range of options. For example, a “Preferred” approach to project management might be the use of an online Kanban board; “Acceptable” might be a form of Agile; use of waterfall methodology might be “Discouraged;” and using Excel for project management would be “Unacceptable.” Governance bodies (or the teams themselves) can then score themselves on a 0-3 basis and require a minimum score to deploy.
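A PADU scorecard is simple enough to automate. Here’s an illustrative sketch – the practices, ratings, and threshold are all made up – that maps each rating to 0-3 and checks a minimum passing total:

    import java.util.Map;

    // Toy PADU scorecard: Unacceptable=0, Discouraged=1, Acceptable=2, Preferred=3.
    public class PaduScorecard {
        enum Padu { UNACCEPTABLE, DISCOURAGED, ACCEPTABLE, PREFERRED }  // ordinal() gives 0-3

        public static void main(String[] args) {
            // Hypothetical review of one team's practices.
            Map<String, Padu> review = Map.of(
                    "Project management", Padu.PREFERRED,     // online Kanban board
                    "Methodology",        Padu.ACCEPTABLE,    // some form of Agile
                    "API documentation",  Padu.DISCOURAGED);  // hand-maintained wiki page

            int score = review.values().stream().mapToInt(Enum::ordinal).sum();
            int minimumToDeploy = 6;  // hypothetical threshold
            System.out.println("Score: " + score + (score >= minimumToDeploy ? " - pass" : " - fail"));
        }
    }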

Principles Must Evolve

Organizations must recognize that, owing to technical advances, the principles may – and must – change over time. For example, the sixth “factor” above mandates that processes should be stateless; yet in today’s world it is increasingly possible, from both a technical and a cost-effectiveness point of view, to maintain state in business logic in certain circumstances.

Organizations Must Have Their Own Principles

Again, organizations may interpret industry principles according to their priorities and needs. Moreover they can – and should – add their own. For example, 12-Factor does not mention building zero-trust computing ecosystems and for many, if not most, this is essential.

Assessing Software Architecture

Having created a robust set of principles, it’s relatively straightforward to measure the degree to which a given product or service adheres to them. Many organizations use scorecards to rate software in an architecture review process, with minimum passing grades.

The Value of Software Architecture

A not-so-obvious conclusion from this exercise is that there are fundamentally three value propositions of applying software architecture strategies, principles, and assessments:

Usefulness, in other words, ensuring that the software does what its users want it to do, in terms of features, availability, and scale, to name a few.

Risk mitigation. Compliance with regulations and standards helps reduce the probability of a business or technical disaster.

Future-proofing, that is, enabling the product to grow both in terms of new features and the ability to exploit new technologies.

It’s exceedingly difficult to quantify the value of architecture (and architects), however. Yet it is intuitive that software cost estimation models such as COCOMO (the Constructive Cost Model), which base estimates on lines of code (specifically, E = a × (KLOC)^b), could benefit – i.e., improve their accuracy – by including coefficients for architectural influence.
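For instance, Basic COCOMO in its “organic” mode uses roughly a = 2.4 and b = 1.05, yielding effort in person-months; the sketch below bolts on a purely hypothetical architecture multiplier to illustrate the idea – it is not a calibrated model:

    // Basic COCOMO effort estimate, E = a * (KLOC)^b, in person-months,
    // with a purely hypothetical architecture-quality multiplier added.
    public class Cocomo {
        public static void main(String[] args) {
            double a = 2.4, b = 1.05;   // Basic COCOMO "organic" coefficients
            double kloc = 50;           // 50,000 lines of code
            double archFactor = 0.9;    // hypothetical: good architecture shaves 10%

            double effort = a * Math.pow(kloc, b);
            System.out.printf("Nominal effort: %.1f person-months%n", effort);
            System.out.printf("With architecture factor: %.1f person-months%n", effort * archFactor);
        }
    }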

Many thanks to Miha Kralj of EPAM Systems, Jim Wilt of Best Buy, and Bill Wood of AWS for their comments and suggestions. Errors of course are my own.

Predictions for 2022

Well, it’s that time of year when everybody writes their predictions for the year, FWIW – which, given the track record of most such posts, probably isn’t much.

Here are mine … but first … a disclaimer: these are opinions which do not necessarily reflect those of anyone I work for, anyone I have worked for or will work for. Hell, with my ADD, they may not even represent my own opinions five minutes from now.

I Learned about * From That: Resolving Conflict

One of the great privileges we who worked at Lotus Development back in the eighties and nineties enjoyed was access to many remarkable events at MIT, Harvard, and other area institutions. In retrospect, some of these became legendary: Nicholas Negroponte at the Media Lab, the first public introduction of Project Athena, whose graphical interface later became the basis of X-Windows, and, ultimately, the Mac.

Perhaps the moment that stayed with me the most, however, and which I have since recounted any number of times, was a panel discussion in which Marvin Minsky (the father of AI) and Seymour Papert (co-inventor of the Logo programming language, among other things) took part.

Seymour (I don’t know if this is still true, but everybody in those days referred to everyone else, regardless of stature, by their first name; at one event the legendary Arthur C. Clarke conferenced in from his home in Sri Lanka – everybody called the man who first proposed the communications satellite and wrote 2001 “Arthur,” which I found a bit disconcerting) told a story of his youth, when he was a math teacher.

(A caveat before I start: it’s been over 30 years since this event took place; I may not have recalled the details precisely. But the main points — and the moral of the story — are correct.)

Anyway, it went like this:

Though born and raised in South Africa, Seymour received a PhD from the University of Cambridge, and then taught at various universities around Europe. Passionate about mathematics education, at one point in his life he returned to his homeland to instruct teachers about the “New Math,” that is, new approaches to mathematics pedagogy. (These days I suppose we’re all a bit jaded about “new maths,” since there have been so many of them.)

He went from village to village, speaking in lecture halls and town halls, advocating for the new methodology. But he often noticed that as he spoke, slowly but inevitably, people would quietly leave; sometimes only half the audience was left at the end of the lecture.

Finally he asked someone: why, he wanted to know, were people walking out on him?

It’s the way you present it, he was told. Western-educated people tend to resolve conflict by deciding which of several viewpoints is right and which is wrong (or, in Hegelian terms, thesis and antithesis).

Here in the bush, however, we do things differently. We sit around in a circle round the acacia tree. One person proposes an idea. The next person respectfully acknowledges the idea and proposes some modifications.

And around the circle they go, until there is consensus.

Thus, at the end, they have a mutually agreed upon idea or plan. No one is “wrong.” No one is “right.” Since everyone has contributed, and agreed, everyone is “bought in,” to use the Western term.

I’ve used this approach countless times, and while it can be time-consuming, and does require a modicum of patience and maturity from all participants, it does work.

A Few Words about Lotus

[This post is a response to Chapter 49 of Steven Sinofsky’s wonderful online memoir, in which he talks about competitive pressures facing Office in the mid-90s. Unfortunately you have to pay a subscription fee in order to comment, so I’ll comment here instead.]

Steven,

Excellent and thought-provoking piece – brings back a lot of memories. During the time described, I was a senior developer at Lotus and I’d like to offer a couple of clarifications.

Re components: you (understandably) conflate two separate Lotus products, Lotus Components and Lotus eSuite. The former were a set of ActiveX’s written in C/C++ and were targeted at app construction. My late friend and colleague Alex Morrow used to talk about developers becoming segmented as “builders” and “assemblers.” “Builders” created the ActiveX controls, and “assemblers” stitched them together into applications. The idea persists today in the form of Logic Apps and so-called citizen developers. We at Lotus had some brilliant developers working on Components but I suspect the concept proved premature in the market.

Even more ahead of its time was Lotus eSuite, which was a set of Java applets designed to run in the browser. eSuite got its start when a developer (actually, me) ported Lotus 1-2-3 v1 to Java as an experiment; Lotus and IBM loved it because it was perceived to be disruptive against Office, which while not yet dominant threatened Lotus’s SmartSuite.

Ironically, however, eSuite ran (by far) the best on Windows and IE. I recall attending the first JavaOne, where, at a breakout session, Microsoft demonstrated its rearchitected JVM and Java libraries – vastly better in terms of performance and load time than the Sun-supplied versions. (This was partly because where Sun built the Windows libraries on MFC – pretty clunky at the time – Microsoft wrote to Win32, essentially right to the metal.) And, of course, the IDE, Visual J++, supported the UI nuances and superior debugging experiences that we’d come to expect. It really was, as you quite rightly say, a tour de force.

But it was clear to us at Lotus that Microsoft had mixed feelings about it all. I and several others traveled to Redmond (aboard an IBM private jet no less!) to talk with Microsoft execs about the future of NT and Java (why NT? Because at the time the Lotus Notes server was one of the key – if not the key – driver of NT sales). In a day full of briefings in Building 8 Charles Fitzgerald, then the PM for VJ++, came last, and for that we were joined by BillG, who couldn’t believe we were building “serious apps” on Java. (He told me I was “on drugs.”)

I always thought Microsoft’s abandonment of Java was a bit of a shame: I’d written an email to David Vaskevitch (then the Microsoft CTO) suggesting that Microsoft’s superiority in development tools and frameworks could be used to isolate competitor operating systems – essentially wrapping Solaris in a Microsoft layer. I never heard back.

As it happened, we did ship Lotus eSuite – and it remained the case that neither Macs nor Sun workstations could compete performance-wise with Windows. (To this day I’m stumped why neither Apple nor Sun tried harder to make the technology competitive – it was existential.) And JVM and browser technology were still evolving at the time, so what worked on one platform wasn’t really guaranteed to work on another (belying the “write once, run anywhere” slogan).

eSuite also suffered from a particularly stupid design decision in the JVM (which I believe Microsoft was contractually obliged to implement as well). In order to prevent code from jumping out of the sandbox, the JVM analyzed every single code path at load time, before launch. For an app like a spreadsheet – hundreds of functions, recursive recalculation, a directed acyclic graph of cell dependencies, and so on – the performance hit was murderous. I recall wondering why Sun et al. couldn’t use digital signatures to implement trust, but they never quite got the idea.
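To give a sense of the alternative I mean – establish trust once from the signature on the code, rather than re-proving every code path on every load – here is a latter-day sketch in modern Java. It is purely illustrative, not anything we or Sun shipped; the class name, the all-or-nothing “every class must be signed” policy, and the use of today’s JarFile API are my own assumptions for the example.

import java.io.IOException;
import java.io.InputStream;
import java.security.cert.Certificate;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

// Hypothetical sketch: trust a code archive if every class in it carries a
// digital signature, instead of exhaustively analyzing code paths at load time.
public class TrustCheck {
    static boolean allClassesSigned(String jarPath) throws IOException {
        // The second argument asks JarFile to verify signatures as entries are read.
        try (JarFile jar = new JarFile(jarPath, true)) {
            byte[] buffer = new byte[8192];
            Enumeration<JarEntry> entries = jar.entries();
            while (entries.hasMoreElements()) {
                JarEntry entry = entries.nextElement();
                if (entry.isDirectory() || !entry.getName().endsWith(".class")) continue;
                // An entry must be read in full before getCertificates() returns anything;
                // a tampered entry throws a SecurityException during this read.
                try (InputStream in = jar.getInputStream(entry)) {
                    while (in.read(buffer) != -1) { /* drain the entry */ }
                }
                Certificate[] certs = entry.getCertificates();
                if (certs == null || certs.length == 0) {
                    return false; // unsigned code: don't trust it
                }
            }
            return true; // every class was signed
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(allClassesSigned(args[0]) ? "signed - trusted" : "unsigned - untrusted");
    }
}

In a real system you would of course also check who signed the code against a set of trusted publishers; the point is simply that trust gets decided once, per publisher, rather than by exhaustive analysis on every launch.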

Anyway, the time for running productivity apps in the browser – unquestionably a great idea – hadn’t yet arrived. (It has now.)

A Little Family History

This story is about William Charles Newstead, who was born around 1834 in the Norwich area of England. Sometime prior to 1840 his father, also named William, brought his family to the United States. They settled in Burke, a small town in Franklin County in the very far north of New York State, on the Canadian border. The industries of the time, besides farming, included “only a grist mill, saw mills, tanneries, asheries, starch factories, brick yards and stone quarries.” We know the rough time of the family’s arrival because the Newsteads are listed on the 1840 Federal Census.

I cannot imagine what brought them to a small village whose population at the time was about 2,000. (Today it’s fewer than 300.) Perhaps the Newsteads had friends or relatives already there; or perhaps there was the promise of land grants for new settlers. In any event the Newstead family set up as farmers in the late 1830s. William’s parents no doubt expected to raise their family in this quiet, peaceful town, watch the children grow up, and then enjoy their grandchildren.

But events were about to take a turn.

On April 12, 1861, South Carolina militia bombarded Fort Sumter, near Charleston, and the island fort  surrendered the next day. Thereafter newly elected President Lincoln called for 75,000 volunteers to help put down the incipient rebellion. These recruits were only expected to serve for 90 days, the thinking being that the war against the “bumpkins” in the South would be over very quickly.

Battle of Bull Run

A few months later, on July 21st, 1861, when William would have been about 26 or 27, Confederate and Union armies clashed at the Battle of Bull Run, at Manassas Junction in Virginia – the first major battle of the Civil War (or, as the South calls it, the War Between the States).

At Bull Run, the Union forces were, to the shock of the spectators who had come to witness the event, decisively defeated; both sides recognized that the conflict henceforth would be a protracted affair. Calls for volunteers went up on both sides, and in the North recruitment periods were adjusted to three-year terms.

On September 28th, 1861, young William enlisted in the Union Army and was assigned to the 16th Regiment under one of the three-year recruitments. In those days regiments were identified by the states in which they were formed, so William’s regiment was known as the 16th New York. (There were 16th regiments from other states as well.) The 16th New York was composed of volunteers from the northernmost parts of New York State, including Franklin County, where Burke is located. Regiments of the time comprised around 1,000 men in 10 companies.

William probably underwent training, drilling primarily, for a few short weeks before being sent off to active duty (perhaps according to Hardee’s Rifle and Light Infantry Tactics, the standard work of the time, ironically commissioned by Jefferson Davis in 1853 and published in 1855; Davis, who was Secretary of War at the time, later went on to be President of the Confederate States of America. Hardee himself also joined the Confederate army as a lieutenant general; nevertheless the book was used by the Union as well).

Why did he join? Was it a sense of duty? Was it for the “bounty” (signing bonus) offered? Were all the boys of Franklin County joining up? Or was he an idealist perhaps desiring to help rid the country of the scourge of slavery? We’ll never know.

And the war was over slavery, much as the South for years afterward tried to claim otherwise. Even in the late 1960s when I lived in the Deep South it was still taught in school that the war was about socioeconomic issues or “states’ rights” and not about slavery (a false Yankee claim, they said); such were the textbooks of the time and the prejudices of the place.

Private Newstead joined his unit on October 5, 1861, as a member of I Company, commanded by Captain J.J. Seaver, part of the famous Army of the Potomac. (Specifically: “it was assigned to the Second brigade (Gen. H. W. Slocum) of Gen. Franklin’s division. This brigade was composed of the Sixteenth and Twenty-seventh New York, the Fifth Maine, and the Ninety-sixth Pennsylvania, and was not subsequently changed during the period of service of the Sixteenth, except by the addition of the One hundred and twenty-first New York early in September, 1862.”)

Soldier of the 16th; note the straw hat

The 16th was known as the “straw hat men” because alone among Union units its troops wore straw hats – a gift apparently from a friend of the regiment.

The regiment overwintered just south of Alexandria, Virginia and did not see action until the spring of 1862. In April 1862 Slocum’s brigade, including the 16th, boarded a ship and sailed to Yorktown to take part in the Peninsula Campaign, an abortive attempt by the Union army to capture the rebel capital at Richmond. On Wednesday, May 7th, 1862, the 16th New York participated in its first battle at Eltham’s Landing (also known as the Battle of West Point) — really more of a skirmish.

The 16th retreats after the Battle of Gaines’ Mill

However, on the 27th of June 1862 the 16th New York was heavily engaged at the Battle of Gaines’ Mill — one of the bloodiest battles of the war yet not one of the better known. The Army of the Potomac went up against the bulk of the Army of Northern Virginia under the command of Robert E Lee and the result was a disaster for the Union forces. Private Newstead’s regiment lost about 230 killed, wounded, or missing in that engagement. A few days later, the 16th fought an inconclusive battle at Frayser’s Farm.

What must it have been like to leave that idyllic village in northern New York State and be thrust into a meat grinder, where thousands of furious men in one uniform focused on nothing but the wholesale slaughter of the men in the other? Where the wounded lay screaming in the fields, where doctors routinely amputated arms and legs in the most primitive of conditions, and where, after dark, crows, coyotes and wolves feasted upon the human carrion?

Field Hospital at Savage Station, June 30, 1862, following the battle
(Look closely to see the straw hats)

The 16th New York was next engaged at the Battle of South Mountain (Maryland), also known as the Battle of Crampton’s Gap, on September 14, 1862. The 16th, among other units, was charged with dislodging a sizable Confederate force from one of three passes on the mountain. The 16th led the advance – a “brilliant dash,” it was called – and suffered 63 killed and wounded.

Battle of Crampton’s Gap

History records it thus: “Though the Federals ultimately gained control of all three passes, stubborn resistance on the part of the Southerners bought Lee precious time to begin the process of reuniting his army, and set the stage for the Battle of Antietam three days later,” although the 16th did not participate in that bloody but indecisive battle. Lincoln, claiming a “victory,” announced the Emancipation Proclamation a few days afterwards.

During the mild winter of 1862–63 the division participated in the famous “Mud March,” in which Union troops attempted to surprise the South by crossing the Rappahannock River; however, because of bad weather and disagreements among the Union generals, the attack failed.

When winter ended the 16th moved out with the rest of the division and joined General Hooker at Chancellorsville, the site of one of the most important battles of the war and a decisive victory for the Confederates. At that battle, which began on April 30th, 1863 – a Thursday – and lasted through May 6th, the 16th was positioned on the front line, on the right flank. At Chancellorsville the regiment lost 20 killed, 49 missing and 87 wounded.

Perhaps because of the heavy losses it had sustained, the 16th regiment was disbanded later in May of 1863. The so-called “3-year-men,” including William, were transferred to the 121st New York.

The 121st Infantry, under the command of Colonel Emory Upton, was thus known as Upton’s Men. Upton, who subsequently had a distinguished career, rising to the rank of Major General, would later write a book entitled The Military Policy of the United States, which shaped American Army policy for decades – arguably, to this day. The 121st was part of the Second Brigade under Brigadier General Joseph Bartlett, of the First Division led by Brigadier General Horatio Wright, which in turn belonged to VI Corps, commanded by Major General John Sedgwick.

Monument to 121st at Gettysburg

From June until July of 1863 the 121st was part of the Gettysburg campaign. On the evening of July 2nd, 1863, following the heroic defense of Little Round Top by Colonel Joshua Chamberlain and the 20th Maine – perhaps the most decisive engagement of the entire war – the 121st was assigned to reinforce and relieve Chamberlain. It occupied the north end of Little Round Top and held the position until the end of the battle. Two enlisted men were wounded. Thereafter, from July 5th until July 24th, the 121st pursued Lee to Manassas Gap, Virginia.

Gettysburg was the turning point of the war. Lee, forced to halt his advance into Pennsylvania, began the retreat with his armies to Virginia. Today, on Little Round Top, there stands a monument to the 121st.

The 121st, pursuing Lee, subsequently took part in the Bristoe Campaign, a series of bloody battles fought in Virginia during October and November of 1863. 

Somewhere in all of these fights young William was wounded, losing a finger. Perhaps because he was no longer able to shoot – it was his right index finger – he was discharged on October 5th, 1863, about a year short of his full three-year enlistment. From the gore and agony and horror of the battlefields he returned to quiet Burke, where some time later he married Amanda Esterbrook, an orphan 10 years his junior from the next town over. It’s hard not to wonder how all his wartime experiences colored the rest of his life.

William’s Record in the US Civil War Pension Index

William’s parents were lucky enough to see him return; indeed, they did not pass away until the 1890s. William and Amanda had six children including a daughter, Eva Melinda, who moved to Boston to marry one Patrick Hansbury, in 1893.

Main Street, Burke, New York (date unknown; probably 1920s)

William’s obituary

William died in 1896 at age 62 of “epilepsy,” according to his obituary, just a few years after his long-lived parents. After his death Amanda moved to New Hampshire to live with another of her daughters, Emma. Amanda collected William’s veteran’s benefits until she passed away at age 79 in 1922, in Litchfield. She was survived by 18 grandchildren and two great-grandchildren.

Eva and Patrick lived in Newton, Massachusetts and had a large family. The eldest, a daughter named Estella, said in later years that her father “treated his daughters like sons, taught them how to do everything.” A stableman, Patrick died in Plymouth in 1924 during a ferocious storm, trying to save the horses in his charge (eerily reminiscent of an incident a year prior, when the Newton Journal chronicled the actions of another of Patrick’s daughters, Delia: “Leading ten terror stricken horses through the smoke of a burning stable, Miss Delia Hansbury early Saturday morning saved the lives of every one of the horses belonging to the riding school of her father, Patrick J Hansbury, …”) Eva died in 1939, aged 63, from complications from an appendectomy.

Estella would grow up and marry Lester Briggs (on June 10th, 1922, in St Paul’s Cathedral in Boston), himself a Purple Heart veteran of World War I and a photo lithographer. They had two children, Wilbur and Kenneth. Lester died of colon cancer in 1946; Estella lived a long and independent life – for years she ran a cosmetics shop in Milton, Massachusetts – passing in 1987 at the age of 92.

Their oldest son, Wilbur, served as a fighter pilot in World War Two. Upon his return he attended Boston University, where he met his wife, Elizabeth Bowen. Wilbur and Elizabeth (“Bill” and “Betty”) had two children of their own, Barry and Geoff.

To all the Briggs kids reading: you know the rest; and now you know where you come from –  at least one small part of it.  Know, as well, that every one of these people I’ve written about is proud of you!

Postscript:

There’s more to our story:

On October 13, 1862, a nineteen-year-old Albany man, an immigrant from the Hesse-Darmstadt region of Germany named Valentin Ahlheim, enlisted in the 177th New York Infantry “to serve nine months.” The 177th fought in the Western theater of the war, at McGill’s Ferry, Pontchatoula, Civiques Ferry, and “fought gallantly” in the final assault at Port Hudson in Louisiana, the longest siege (48 days, called at the time “forty days and nights in the wilderness of death”) in US military history to that point.

The 177th was mustered out in September 1863; sometime around then Valentin was transferred to the 21st Independent Battery NY, which was involved in the same battles; he remained with the unit until the end of the war.

Valentin’s cousin Elizabeth, a few years younger, married an Albany farmer named August Meyer, also a German immigrant. They had seven children; their youngest daughter, Augusta Henrietta, married Albert Edward Bowen on July 10, 1909. Their youngest daughter, Elizabeth, married Estella’s son Wilbur in 1950 – and again, you know the rest.

Images:

Fort Sumter: Currier & Ives, https://www.loc.gov/pictures/resource/cph.3b49873/  public domain, via Wikipedia

First Battle of Bull Run: chromolithograph by Kurz & Allison, 1880 public domain, via Wikipedia

Straw Hat NY 16th Infantry http://www.framingfox.com/16newyoincp1.html

Retreat from Gaines’ Mill https://en.wikipedia.org/wiki/16th_New_York_Volunteer_Infantry_Regiment#/media/File:Retreat_from_Gaines’s_Mill.jpg

Field Hospital at Savage Station VA, June 30, 1862 https://en.wikipedia.org/wiki/16th_New_York_Volunteer_Infantry_Regiment#/media/File:After_Battle_of_Savage’s_Station.png

Cramptons Gap https://dnr.maryland.gov/publiclands/PublishingImages/SouthMountain_Battle-of-Cramptons-Gap-large.jpg

121st Monument at Little Round Top https://en.wikipedia.org/wiki/121st_New_York_Volunteer_Infantry#/media/File:121st_regiment_1.jpg

Burke, NY http://freepages.rootsweb.com/~tollandct01/genealogy/gfentonancestry.html (links on this page are broken but here is the direct link to the photo: http://freepages.rootsweb.com/~tollandct01/genealogy/burkeny.jpg )

William’s Pension Record https://www.ancestry.com/imageviewer/collections/4654/images/32959_033013-00751?usePUB=true&_phsrc=hwQ403&usePUBJs=true&pId=11890614

William’s obituary, courtesy of Tammy Traster on Ancestry.com https://www.ancestry.com/mediaui-viewer/tree/5601189/person/316626918/media/d168aaef-778e-4307-9a00-b1306c320175

Muster record for 177th NY: https://dmna.ny.gov/historic/reghist/civil/MusterRolls/Infantry/177thInf_NYSV_MusterRoll.pdf Valentin listed on page 82 (about 2/3 down the page). Also see https://dmna.ny.gov/historic/reghist/civil/rosters/Infantry/177th_Infantry_CW_Roster.pdf page 1081.

History of the 177th:
https://dmna.ny.gov/historic/reghist/civil/infantry/177thInf/177thInfMain.htm