Sitting in my bag just a few feet away is a card I had planned to write to Herr Professor Peter Liermann. I’m not a spiritual person, but the universe had been begging me for a while to let him know the contributions he made to my life. It will be one of my great regrets that I failed to do so.
Yesterday I found myself in a German bar in Seattle, lifting a Pinkus Mueller. An hour later I received a message from a classmate with the news of his passing. It’s funny the way the universe led me there to hoist one in his honor, as he and I had done at Einspruch bar in Münster many times, twenty years prior.
I’m not someone who talks about heroes or role models, but at a time in life when I needed it, Peter was a father. He challenged and molded my Weltanschauung on politics and society, in language and relationships. The time I spent at the Westfälische Wilhelms-Universität under his tutelage was my formative college experience. To this day, my internal monologue when speaking or reading German is in his voice. I’ve had people ask, “Did you grow up in Germany?” Perhaps not chronologically, but practically, and he played a significant role in that.
There was a student who decided not to go to Münster with us, because she found Peter a challenging curmudgeon. I’ve reflected so many times on what she missed. Yes, the one time I used the informal “du” form with Peter, I was scarred by his gaze. Yes, I spent many hours in front of a mirror practicing how to say “fünf” and “Deutschland” at his chastising. But I came to know him as a deeply caring person who accepted our foibles as young people and sought to provide guidance we were too young to know we needed.
Each student of German at Luther College was expected to complete a brief oral examination to qualify for their minor or major. After our assigned times, we compared notes, with many students reporting that Peter had engaged them on casual topics like what they intended to do after graduation. Of course, he asked me to speak on the forthcoming Euro and its role in European economic integration. I liked to think of it fondly as his revenge for my frequent misbehavior.
So many of his stories have stuck with me, such as blowing bubbles in a gas mask issued in a bomb shelter as World War Two raged above. Or, his first experience eating wheat bread instead of Roggenbrot upon reaching the United States. To think he wrote the definitive thesis on Konjunktiv Eins makes my head hurt considering the laborious effort. He was the first person I met to declare “I am a socialist.” Anytime I see a train I think of him and thanks to him I know the contexts that give cause for the use of the longest German swear word.
I remember going to hear Professor Jessica Paul, his wife, perform a piano concert, and seeing Peter’s beaming face as he watched. There was something I liked about the way they looked at each other and the way I heard them speak about each other. Now, with another twenty years of life experience and a failed marriage, I can recognize the depth of respect and mature love between them. As a parent, I can put in perspective now the understated pride Peter voiced for his children.
Peter once stated simply that his goal was for us to develop a lifelong appreciation for the German language and culture. Peter gave me that, and so much more, and for that I am grateful. He will be remembered and missed.
“The world around us is changing, and government must adapt with these rapidly evolving times,” said Sen. Hertzberg. “California needs to continue our legacy of taking on new and developing technologies, especially ones like blockchain, which is being embraced worldwide and presents a strong level of security that is resistant to hacking.”
I’ve been thinking about my experience as both a software developer and a hiring manager. Interviewing and hiring practices have become such a ridiculous game that there are books and sites dedicated to “cracking the code.” There are even companies outsourcing their interviewing to other companies because they feel they aren’t qualified to do it themselves. It seems to me that everyone is going about this the wrong way.
Once upon a time, the intent was to find people who were good at solving problems, as fundamentally that’s what we’re doing via software. Puzzles were often used as discussion aids, but along the way the industry lost sight of their purpose: not to know how many gas stations there are in Houston or why manhole covers are round, but to demonstrate critical thinking. Then the emphasis shifted to coding tests, which are akin to memorizing the phone book: a good party trick, but one that tells us nothing about a person’s abilities beyond rote memorization, in a single area of a professional software developer’s skill set.
So here are some observations about what’s not working and how to fix it.
Observation 1: Publish salary / compensation range, so potential employees know if it is even worth their while. If you are going to ask potential hires to engage in an arduous process, the economics should be made clear.
Observation 2: Asking to see a “portfolio” or code examples is ridiculous. We’re software developers, not writers or artists, so calling it a portfolio is flawed unless the role has a significant visual or artistic component. Even then it might not be appropriate, as the visual design itself might not be the work of the developer. Plenty of code generating billions of dollars has no user interface. Much of the code we write as developers is not our own – it is protected by intellectual property laws, and is not ours to share with future employers. You wouldn’t want us showing code we wrote for you to other companies, would you? Further, anything complex enough to impress me as a hiring manager will take too much time out of my schedule to assess. It’s akin to a writer showing me a dictionary: words, but lacking context.
Observation 3: Take home tests / projects likely aren’t going to achieve what you want. First, if you haven’t told me anything about the compensation, then why am I incentivized to spend my time jumping through your hoops? Then there is the reality that you have no guarantees that it was actually your candidate who completed the work.
Observation 4: The all-day interview process. In some cases I’ve heard of companies that are now asking for multi-day interviews. This puts a tremendous burden on the candidate, plus is often a test of stamina and trivia as much as a valid showing of the candidate’s abilities. It’s also dependent on the interviewing skills of everyone on a candidate’s loop.
Observation 5: The difference between a good co-worker and a bad one seldom comes down to whether they can solve FizzBuzz. It tends to be about:
- Whether they are good at patterns and abstract thinking
- Whether they can stay on task and manage their work
- How they approach working with other people, especially across skill areas
- Their ability to evolve into new challenges and technologies
Fundamentally, if you evaluate only on the basis of simple coding skills, you get the proverbial undifferentiated “code monkey.” Is that what you want to hire?
Observation 6: Some people are really good at interviewing, but weak at work.
Observation 7: Sometimes a really excellent candidate is just not good in a particular environment. For example, I once hired someone who excelled at getting things done in a highly politicized corporate environment, but couldn’t keep up with the lean-and-mean pace of a startup. It wasn’t that he wasn’t a capable professional; it just wasn’t the right fit.
Observation 8: Sometimes companies misrepresent themselves in ways that set up candidates to fail. For example, many years ago I interviewed at a company under the guise of doing Java development. Somewhere in the conversation it became clear that they really needed a lot more Perl. At another company, same kind of thing happened with Smalltalk and Java. Point being, the companies misrepresented the work environment, the nature of the projects and work, and the technologies at play. This is a sure-fire way to end up with a disgruntled employee, which isn’t beneficial to the company either.
Put all this together and it becomes obvious that the only way to select candidates is to actually see them do the job. The proof, for all parties, is in the pudding, and I propose this could be accomplished in one of two ways:
- Rather than an all-day interview loop, have the candidate do the job for a day. That is, have them be an active participant: they come to the meetings, they work on assigned responsibilities, perhaps they pair-program on actual tasks as a way to integrate into the group. This is the only way to reliably see how they will function inside your organization. If you subscribe to the sentiment that “people are our greatest asset,” or that effective hiring is the most important job of each manager, then it is a no-brainer to actually do this. It’s the most accurate way to get a picture of the future working relationship. They could even be given a small stipend as a reasonable exchange for their expected contribution to the workday, since you aren’t interviewing them so much as expecting them to be an active participant on the team. Yes, it would take some effort on your team’s part to pull this off, but it’s the most reliable read you’ll get on the person’s actual technical fit with your team.
- If you operate in a distributed team or in some fashion that prevents you from having them spend a dedicated day with your team, then an alternative is to give them a small paid project. Give them some small amount of work, pay them for it, and observe how they approach it, how they interact with your team, and the resulting work product.
In both cases, overall hiring risk is reduced. The candidate knows in advance whether it is worth their time to pursue the opportunity. By integrating them with the team, you are able to assess their technical abilities, their comprehension of the problem domain, and their ability to interact effectively with your team. You know whether they can accomplish the work because you’ve seen it first-hand.
Angular and React are popular frontend frameworks, backed by Google and Facebook respectively. I don’t recommend them.
Angular and React are fine and much like the old adage that nobody got fired for buying IBM, you’ll be politically safe using either one. Vue.js is the frontend framework I’ve wanted since the late 1990s. Take this example of a single file component:
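A minimal sketch of what I mean, with hypothetical component names and data:

```vue
<template>
  <!-- plain HTML: presentation only -->
  <ul>
    <li v-for="beer in beers" :key="beer">{{ beer }}</li>
  </ul>
</template>

<script lang="ts">
// plain TypeScript: the data, kept separate from the markup above
import Vue from "vue";

export default Vue.extend({
  data() {
    return { beers: ["Pinkus Mueller", "Altbier", "Kölsch"] };
  },
});
</script>

<style scoped>
/* plain CSS, scoped to this one component */
li { list-style: square; }
</style>
```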
Straight HTML, TypeScript, and CSS. Separation of data from presentation. Simple organizational structure. Trivial for any developer to learn. This is the magic of Vue.
Angular in particular is making significant inroads in corporate settings, viewed as the “enterprise grade” choice. While there is nothing inherently wrong with it, it’s very complex, and in comparison to Vue, needlessly so. JSX, while not required for React, is a significant step backwards. React appears simple on the surface, but quickly falls into the same complexity trap as Angular.
Fundamentally what we want in a frontend framework is a way to encapsulate reusable components. We want to be able to do this using standard languages. We want to make it possible for developers to build actual functionality, quickly. Vue does all of this in a superior fashion to any frontend framework I’ve used in the past twenty years of development.
This is an old story about Java, but conceptually applicable to whatever you might be using. About 2008 I got the call that a nationally recognized car rental firm was struggling with the performance of their quoting system. A team had spent about a month trying to get the application to work, with no luck. They would run a few transactions through and watch as CPU and memory spiked, never getting more than a few requests to respond in any way.
The first thing I discovered was that their basic install was broken. None of the team actually knew anything about application development; they were fundamentally systems administrators, focused on the incantations that make the operating system and network stack hum. First lesson: make sure you are actually performing a valid test in a valid environment.
Once we started generating traffic against the application, a whole slew of apparently innocuous errors appeared. Back in the day, I’d walk into a typical environment and be greeted by thousands of seemingly innocent exceptions piling up in some log file. JNDI, which for the uninitiated you can think of as a configuration store, was a frequent culprit as someone would forget or more likely fail to even attempt to configure it correctly.
This quoting system was overly simple, as most of its functionality was about queuing up calls to send to the real action, somewhere back on a mainframe. So with few actual transaction types, it was even easier to see that more time was being spent waiting on JNDI than on actual functionality. Because JNDI lookups were broken, 250ms was being added to every transaction, waiting for a lookup to fail and an exception to be thrown. To make it even worse, the value being referenced through an expensive lookup was really just a static value anyway.
Two lines of code later, I’d shaved that time off every transaction. Isn’t it amazing how fixing minor flaws can yield major benefits?
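The actual two lines aren’t shown here, but the shape of that kind of fix can be sketched. All names below are hypothetical; the idea is simply to resolve the value once instead of paying for a failing lookup on every transaction:

```java
import java.util.function.Supplier;

// Hypothetical reconstruction of the fix: resolve the value once and cache it,
// instead of paying a ~250ms failing JNDI lookup on every single transaction.
public class QuoteConfig {
    private final Supplier<String> jndiLookup; // stands in for the real JNDI call
    private volatile String cached;            // resolved once, reused afterwards

    public QuoteConfig(Supplier<String> jndiLookup) {
        this.jndiLookup = jndiLookup;
    }

    public String endpoint() {
        String value = cached;
        if (value == null) {
            try {
                value = jndiLookup.get();
            } catch (RuntimeException e) {
                // the value behind the expensive lookup was effectively static anyway
                value = "mainframe.example:3270";
            }
            cached = value; // subsequent requests skip the lookup entirely
        }
        return value;
    }
}
```

The real fix was presumably even simpler, since the looked-up value was static to begin with, but the caching shape shows why the 250ms disappears from every request after the first.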
A major North American bank had a problem with its commercial loan application. The application was used by lenders to look up credit information about businesses, calculate rates, and process loans. At one point in the process, one of the pages would occasionally simply go blank. Rumor was that in some cases, if left open, it might return some of the needed data after a significant delay. In any case, it was damaging the bank’s abilities to service customers.
This had been happening for over a year when I was called in. What stood out was that the overall performance of the application wasn’t terrible. The codebase, or at least what I got to see of it, wasn’t great; about average for work offshored to the cheapest vendor. Lots of copy-paste code, a half-working build system, a mish-mash of dependencies. The usual. My primary technical contact was a bright guy who knew the major technical and political inner workings of the bank. But he was stumped.
About the only pattern was the number of sloppy errors. For example, the network captures showed a number of HTTP 404 errors. Generally these would be considered innocuous: a request was made for a file, and it wasn’t found. For months the development team had parroted the same response, that these were just simple errors with no impact on functionality.
Something so small had caused a year’s worth of angst and incalculable business harm.
Once upon a time I got an urgent call from a nationally known auto parts company. They described significant performance degradation in their online product catalog application, used in all of their stores. Upon arriving on site, I sat through meetings across the organization, representing all the stakeholders. This was all very normal, but I noticed that few technical people were made available, nor could any actual users corroborate the described degradation. In fact, about the only quantifiable information came from a technical director sharing network data. His group was also the only one with any kind of analysis tooling in place, and the only one with any real longevity at the company.
Generally in an application performance triage effort, you get an early sense of the scope and shape of the problem. By the end of the first day, you’ve begun gathering together real data, or you’ve at least formulated a plan for deploying tooling to get the data. By the end of day one on this project, I’d met with multiple Directors and Vice-Presidents, all of whom regurgitated the company line that the catalog application was slow, but almost none of whom had any personal experience using the application.
Something didn’t seem right.
So I went to the project sponsor and I spelled it out: “Who am I here to fire?”
The CIO had overpromised a major project and was in trouble with line-of-business executives. That CIO and the management team they brought with them were likely to lose their jobs if they couldn’t deliver. But what if there was a major catastrophe? An unavoidable distraction away from this pending project failure? The product catalog, along with the network team, had been selected as the disposable scapegoats.
In two weeks of work, no meaningful technical problems were found, and our team was railroaded out of there quickly once it became clear to the project sponsorship that they were at a very real risk of being exposed.
An application breaks in the middle of the night, someone from your technical team rushes to the rescue. They are up all night performing incantations and miracles to minimize data loss and restore service to your users. They are the heroes of the hour, in some cases literally saving the company.
Or are they the villains? Consider whether that same hero:
- Didn’t use application monitoring tools to proactively watch for issues.
- Didn’t set up automated deployment tools, to reduce human error.
- Didn’t have controls in place over their configuration.
- Didn’t do any kind of performance testing.
- Didn’t size systems appropriately for actual workloads.
- Let logs fill up, domains and certificates expire.
- Used the latest fad technologies without consideration for the business.
As a consultant, I regularly walk into situations where the supposed technical hero of the organization is actually a villain. More often than not, it falls into one of these two buckets:
1) The Peter principle: Someone was given more responsibility than they are actually competent to handle. Easy to happen if you have non-technical management over technical talent.
2) A kingdom builder: Someone technical who enjoys the power in being a gatekeeper for the organization.
An incident post-mortem should not be about blame, but about establishing the facts of what happened. Sometimes your technical team knows better. Especially in early-stage companies with limited funding, corners get cut. It’s critical to listen to your technical team when they point out areas of business risk, and to ask them to identify those areas so they can be preemptively addressed, or at least acknowledged. Learn whether your heroes really are heroes, and if they aren’t, take action to improve your organization.
In working with startups and early-stage ventures, I’ve noticed a terrifying trend: a complete lack of control over their own intellectual property. Examples include:
- Lack of controls over passwords and certificates
- Lack of knowledge about the location of servers and hosting
- Lack of access to source code / source control
- Lack of knowledge of build processes / technical operations
Some of these companies don’t even have definitive ownership over their own logos and branding! In each recent case, a relationship went bad (employee, agency, or contractor) and I was called in to assess. These were small but profitable and growing businesses. In each case, management / ownership had at most a chain of emails documenting any of the above assets.
It’s easy to dismiss these technical assets as something for your development team to own. But if that team is offshore, for example, you could be looking at a very expensive legal process to recover them, if recovery is viable at all. And if all information is held by a single individual, you face the proverbial “hit by a bus” problem: if that individual becomes inaccessible, you may run significant business continuity risk.
Steps to protect your investment include:
- A member of the management / ownership team should create all initial accounts on cloud services.
- All accounts should be established with a company email address.
- Certificates, passwords, and similar credentials should be stored in a shared password keeper.
- All accounts should be created using company payment instruments.
- All legal contracts need language establishing your ownership over your assets and a path to reclaim any under the physical control of contractors. (Consult your lawyer for specifics)
Intellectual property assets like source code are at the heart of your business; treat them as such.