A few months ago I revisited Conway’s law, a famous adage that says a system’s design structure mirrors its designing organization’s structure. I found that this wording, while prevalent and generally correct, is incomplete; read on for the resulting expanded viewpoint and some applications.
As Disney takes Star Wars mania to new levels, I find it increasingly difficult to remain the odd guy who has never seen a movie and doesn’t know much about the series. In truth, it’s impossible to fully evade this cultural phenomenon, and indeed one of my favorite project/task management techniques comes from a timeless phrase by master Yoda:
Do, or do not; there is no try.
I’m a big fan of David Allen’s Getting Things Done and Michael Linenberger’s Manage Your Now methods. Crisply stating the next action for an open loop is a simple but powerful way to ensure progress. And taking a page from Yoda, I’ve made it a point to never use “try” when I’m writing down a next action or committing to something in general.
Whenever I’m tempted to write “try” in an email or task note, I pause and ask myself, “what would it take to delete ‘try’ here?” Am I thinking I can get more done today than I actually can? Do I need to enlist support from someone else before I can commit to this? Do I have to train or read more on a particular topic before I can act on it?
I must emphasize this is not about the words themselves – it’s simply that paying attention to how you word a task is a useful way to uncover hidden dependencies or break it down. I have nothing against the word “try”, or the concept of trying itself; exploring, experimenting, and setting stretch goals are all good things, in the right context. More often than not, though, when planning work I’ve found “try” is a crutch or an oversimplification that you’d do well to remove as early as possible. If you say “I’ll try to get this done by Friday”, I’m not advocating you blindly remove ‘try to’ – it came to your mind for a reason! Take the chance to dig into that reason and come up with a better commitment, even if it’s later or ends up requiring more work than you thought (which you’ll want to know as early as possible anyway).
By paying close attention to the subtle clues your word choices hide, you can improve your planning and management skills. And while it means next to nothing to me, you know what my parting line has to be here, so: may the force be with you!
Almost 21 months ago, I announced here an applied research project to explore the feasibility of using the concepts of architectural description languages (ADLs) to provide automated assistance for high-level electronics design. It was supposed to take around 12 months, but it took quite a bit longer than expected. Thankfully, I wrapped it up by last February. You can read my draft paper here, and peruse and play with the source code here. A finished version of the Eclipse-based visual editor can be found here.
Any and all comments and feedback on this are warmly welcomed!
As another year picks up steam, I’m once again reminded that “time flies like an arrow”. For instance, though it feels like it was yesterday, in February 2015 it’ll be a year since Satya Nadella became Microsoft’s new CEO. Tasked with implementing sweeping changes at the technology behemoth, some of his moves have been expected and applauded, while others have been surprising and controversial. Most of us don’t run a large company for a living, but I think there are three very basic steps that can be inferred from Nadella’s style that are worth keeping fresh in our own jobs.
Many of today’s task management issues stem from using the email inbox as a task management system. Thus far, solutions have revolved around re-educating ourselves on inbox management. Now, a couple of startups (and at least one large email player) are actually rethinking the way our inbox works. As they carefully tread new ground, task management laypeople will benefit immediately, while productivity experts will initially struggle with this new paradigm.
I have recently finished exporting a GMF editor as a stand-alone Eclipse-based product, as part of ongoing research work (see here for details). I ran into a couple issues doing this, and after sorting them out I decided to write this technical note in case it helps other Eclipse plug-in/EMF/GMF/RCP developers.
Last Friday, my LG G Watch informed me a system update was ready to install. Eager to get Android Wear 5.0.1, and having already applied two updates to the watch with no issues, I installed it immediately. After a minor hiccup, the update was apparently successful, but then both the watch and my phone began experiencing battery drain. A factory reset on the watch fixed it, but I only found out what was really going on by serendipity; here’s the story, in case it’s useful to someone else.
If you enjoy working with micro-managers, you can skip the rest of this post.
OK. Now that I have your attention, let me offer a suggestion for dealing with the micro-managers in your environment: approach them with a genuine intention to help with – not correct – this trait. Help them deal with the impact it has on their time, as opposed to making it about the way they control their duties.
Another week, another round of high-profile tech announcements… And security woes. Apple announced its new Pay service, which may finally make digital payments mainstream. It was, however, tainted by concerns arising from “Celebgate” and the presumed role iCloud security played in it. Meanwhile, Google was busy explaining that the five million Gmail credentials recently published by Russian hackers hadn’t been obtained from their servers. Tech giants have successfully transitioned us to a cloud-based digital lifestyle, but a lot of work remains to ensure security is actually usable and effective enough.
Traffic was rather heavy as I was driving home from work today. At some point, I noticed the lane to my right was clear, whereas a few feet ahead my lane was jammed. I started changing lanes, but then the car ahead of me (which was fully stopped) attempted to do the same. As I had more room, I stepped a bit harder on the gas, hoping the other car would notice and let me pass to its right. It worked.
As I pulled away from the jam, I pondered my rather trivial feat. Unconsciously, I had performed a flawed risk/reward analysis: for the perceived benefit of pulling into my driveway a few seconds earlier, I had risked getting into a car crash — even a fender bender is annoying enough to negate any real or perceived time benefits.
Obvious, right? Yet we do it all the time with much more critical things. I’m not talking about flawed probability percentages or delusional rewards — though those are serious problems in their own right; I’m talking about risks and rewards that are not really exchangeable in terms of units or dimensions. Thus, for the prospect of a won argument, we risk a long-term relationship. For the reward of making it to production a couple days earlier, we risk data integrity, customer satisfaction and architectural quality. For the sake of familiarity and transferred responsibility, we enter unacceptable risk as we plan and execute projects using known-flawed waterfall methodologies, with vendors that should know better.
There doesn’t seem to be much written about this, and that makes a lot of sense: risk-reward analysis originates in the financial industry, where the one ruthless unit for all measures is money. We are supposed to do that as well (make a business case or otherwise monetize many of our IT project decisions), but all too often we lack method, discipline, or both — and yet we plow ahead based on questionable proxies for actual business risk and value.
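To make the “same units” idea concrete, here is a minimal sketch of what comparing risk and reward in a single currency looks like. All the numbers are made up for illustration – the assumed value of shipping early, the assumed incident probability, and the assumed incident cost are not from any real project:

```python
# A toy same-units risk/reward comparison, with entirely assumed numbers.
reward = 5_000              # assumed business value of shipping two days early, in dollars
incident_probability = 0.25 # assumed chance that rushing causes a data-integrity incident
incident_cost = 40_000      # assumed cleanup + customer-impact cost of such an incident

# Both sides now share one unit (dollars), so they can actually be compared.
expected_risk = incident_probability * incident_cost  # $10,000
net_expected_value = reward - expected_risk           # -$5,000: rushing loses money

print(f"Expected risk:      ${expected_risk:,.0f}")
print(f"Net expected value: ${net_expected_value:,.0f}")
```

The arithmetic is trivial; the discipline is in forcing both the reward and the risk through the same unit before deciding, instead of weighing “two days earlier” against “data integrity” directly.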
Next time I carry out a risk-reward analysis, I’ll try to make sure that both ends are measured in the same units. I hope you do too!