Hydro Ottawa’s MyHydroLink

15 07 2011

Hydro Ottawa’s new MyHydroLink data is both enlightening and frightening; I need to cut back on peak usage!

Good read: Agile Ruined My Life

14 09 2010

Daniel Markham has written an insightful post on agile development on his “What to Fix” blog: Agile Ruined My Life.

I’ve mostly stayed away from the Agile, Scrum, and XP debate since I’m not in that business anymore. I’ve always felt that this was invented before (we taught it at Rational as iterative and incremental development) and that there is a lot of hype. This article points out some of these pitfalls, in particular the people who claim to be experts in the field.

Multicore article published in Electronic Component News

25 06 2010

Simplify Multi-core by Understanding Key Use Cases | ECN: Electronic Component News.

The Multicore transition: tools are key to success

2 02 2010

[From my Wind River blog:]

For embedded software companies, where so much emphasis is given to supported hardware, operating systems and middleware technologies, tools can get ignored in the fray. This is unfortunate, because tools are key to project success: they can make the difference between meeting a deadline and missing it by many weeks. Tools are all about developer productivity and getting problems solved faster … The Multicore transition: tools are key to success – Bill Graham.

For embedded systems, going “green” will come from customers

1 12 2009

In embedded systems, and likely in other markets, the need to go “green” will come from customer demand for reduced power consumption rather than from corporate citizenship. This might be stating the obvious in that, of course, companies are far more motivated by money or costs than by the need to protect the environment. However, in embedded systems the move to lower power consumption is becoming a more universal concern rather than one specific to mobile or other battery-powered devices.

Power management has traditionally been the concern of devices that need good battery life. In fact, battery life for mobile devices can be a differentiator in the marketplace. We all hate it when our mobile phone runs out of power just when we want to phone home.

Power management and system power consumption are becoming a more general problem in embedded systems. Certainly, power consumption has always been a system design problem because of thermal concerns, but how much electricity a system consumes has been less so. Consider the mobile phone network infrastructure. There is huge computing power in every base station, and base stations consume a huge amount of electricity. It’s estimated that mobile networks worldwide consume 43 billion kWh per year (roughly 2 W per user). Assuming residential power rates here in Ontario ($0.08/kWh), that is about $3.4 billion per year. Power consumption in mobile networks will become a big focus as electricity rates climb. The motivation will be mostly to reduce costs, but the side benefit will be reduced electricity consumption. Companies can then claim they are going green and look good while saving money at the same time. A win-win situation.
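The arithmetic above is easy to sanity-check. Here is a quick sketch; the energy figure and rate are the post’s own estimates, and the implied subscriber count is derived from them, not sourced:

```python
# Back-of-envelope check of the mobile-network power figures above.
annual_energy_kwh = 43e9   # estimated worldwide mobile-network consumption, kWh/year
rate_per_kwh = 0.08        # assumed Ontario residential rate, $/kWh

annual_cost = annual_energy_kwh * rate_per_kwh
print(f"annual cost: ${annual_cost / 1e9:.2f} billion")   # ≈ $3.44 billion

# Average electrical draw implied by 43 billion kWh/year:
hours_per_year = 365 * 24
average_draw_w = annual_energy_kwh * 1000 / hours_per_year
print(f"average draw: {average_draw_w / 1e9:.1f} GW")     # ≈ 4.9 GW

# At roughly 2 W per user, that implies about 2.5 billion users:
users = average_draw_w / 2
print(f"implied users: {users / 1e9:.1f} billion")
```

So the $3.4 billion figure checks out, and the 2 W-per-user estimate implies a subscriber base of roughly 2.5 billion, which is plausible for the time.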

On the technical side, what will enable this? Already we are seeing higher computing power with lower TDP in multicore processors and supporting chipsets. This trend will continue as customer demand for lower power intensifies. Unfortunately, the software side is behind the curve in this respect. In general, embedded RTOSs have poor support for power management. This needs to change quickly to enable the power-saving capabilities in the processors. Once the hardware and software capability is there, I think we’ll see increasing use of power management in the non-traditional markets.
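To make the gap concrete, dynamic voltage and frequency scaling (DVFS) is the kind of capability an RTOS needs to expose. Below is a toy sketch of a demand-based frequency policy; the thresholds and frequency steps are purely illustrative, not taken from any real governor or RTOS:

```python
# Toy demand-based DVFS policy: pick the lowest CPU frequency step
# that still leaves headroom above the current load.
def pick_frequency(load_percent, available_khz):
    """Return a frequency step (kHz) for the given load percentage."""
    available_khz = sorted(available_khz)
    if load_percent >= 80:                 # near saturation: jump straight to max
        return available_khz[-1]
    # Otherwise aim for roughly 25% headroom above the current load.
    target = max(available_khz) * (load_percent + 25) / 100
    for khz in available_khz:
        if khz >= target:
            return khz
    return available_khz[-1]

steps = [600_000, 1_200_000, 1_800_000, 2_400_000]  # kHz, illustrative
print(pick_frequency(10, steps))   # light load -> a low step
print(pick_frequency(90, steps))   # heavy load -> maximum step
```

The policy itself is trivial; the hard part, and the point of the paragraph above, is that the OS must expose the hooks (load accounting, per-core frequency control) for any such policy to run at all.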

Why low-power Wi-Fi is so important

10 11 2009

I saw an announcement on Twitter from @EPNMagazine about Acal Technology’s new Low-Power Wi-Fi Module. I’m not singling out this device in particular, but I think these new super-low-power wireless devices are revolutionary for embedded systems. Why? Because many embedded systems run independently of each other, disconnected from any sort of central control or dispatch system. When they are interconnected, it has traditionally been over wired networking with wired power. Thanks to advances in low-power processors and, now, low-power wireless networking, devices can be interconnected using just battery power.

This sounds great, but why is it revolutionary? It changes the installation criteria for these devices. You no longer need to wire them up, meaning no retrofitting an office or a factory to accommodate a new system. The best example I can think of is Heating, Ventilation and Air Conditioning (HVAC) controls for office buildings. When new offices are built, the designers guess where it makes sense to put thermostats on the walls, and an electrician comes in and wires them up. After a tenant moves in and starts to lay out offices and meeting rooms, it’s inevitable that the HVAC system is out of whack – rooms too hot or too cold. What if you could have interconnected, wireless smart thermostats that run on batteries and can be placed wherever (and whenever) it makes sense? Not only does this cut wiring and setup costs, it likely avoids many HVAC issues down the road.

In some ways, this is reality and similar products exist. But, as hardware prices drop, the functionality of these battery powered devices will increase, as will their ubiquity. When they do, it will just be a software problem to figure out how to talk to all these new smart devices online….

Multicore to Many-Core: Hardware too far ahead of software?

28 10 2009

Following my last post, “Don’t Stop Talking about Multicore,” a few colleagues pointed out that discussing many-core is applicable too. I certainly don’t disagree, since many-core processors are coming as TNBT (The Next Big Thing). I think we will see adoption in specialized markets sooner than in others. In particular, networking, which is already comfortable with multicore, will use many-core tiles to speed up specialized processing. I can see applications in other areas such as imaging and devices that need higher-end processing of audio, video or graphics.

In general, as I stated in my previous post, the embedded market can be slow to adopt new technology in specific market verticals – not because it is old school but because product timelines and planning run years in advance and product life cycles are much longer. The move to multicore is underway, and many-core in the near term will be more niche than commonplace. Why? I think because we are still coming to grips with the software complexity of multicore. Moreover, tools are still catching up to multicore; how will we debug a 100-core system? Many-core will likely be handled the way heterogeneous multicore solutions are today (e.g. a general-purpose CPU plus a DSP or GPU). Vendors might supply libraries that take advantage of many cores while leaving a small number for general-purpose processing. I have heard customers say that heterogeneous systems are particularly hard to debug because they are usually supplied with two disparate tool chains. Interestingly, an article from 2007 was quite prescient about this very topic, supporting my premise that hardware has leapfrogged the software – of course, would I quote an article that didn’t?

In the near term, I think we will see 16- to 32-core chips, which will be “quite-a-few-core” systems. In these cases, I think a hypervisor can bring sanity to the solution by allowing you to create several virtual targets in one. For example, you could create a four-core controller plus 12 specialized processing engines (this is being done today for deep packet inspection). This is manageable because the application complexity is isolated to your four-core target and the specialized processing engines are identical and relatively simple.
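The partitioning described above can be sketched in a few lines. This is only an illustration of the split (the names and counts are mine, not any vendor’s hypervisor API); on Linux a similar effect could be approximated by pinning processes to cores with CPU affinity:

```python
# Sketch of partitioning a 16-core chip into a 4-core control plane
# plus identical single-core processing engines, as described above.
def partition_cores(total_cores, control_cores):
    """Return (control-plane core list, engine-name -> core mapping)."""
    control = list(range(control_cores))
    engines = {f"engine{i}": core
               for i, core in enumerate(range(control_cores, total_cores))}
    return control, engines

control, engines = partition_cores(16, 4)
print(control)              # cores 0-3 run the controller
print(len(engines))         # 12 identical single-core engines
print(engines["engine0"])   # first engine pinned to core 4
```

On a Linux host, each engine process could then be pinned to its core with `os.sched_setaffinity(pid, {core})`; a hypervisor would enforce the same split below the OS instead.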

Whatever way the many-core technology pans out, I still contend that software is behind the curve versus the hardware. And I still won’t stop talking about multicore…

Don’t Stop Talking about Multicore

23 10 2009

I keep telling anyone who will listen (and some who don’t really want to) that product managers and marketers shouldn’t stop talking about multicore. Of course, this is specific to one’s own market, but if you are in the embedded business these are still the early days. The embedded market is really a vast mixture of different types of products and applications, and the applicability of multicore chipsets is different for each. In networking applications, multicore has arrived and is already designed in. In fact, these companies are on 2nd- and 3rd-generation chips with 8 to 16 cores. Industrial and medical companies are following but are likely to adopt multicore chips in the coming 2-3 years. Some market segments, such as aerospace and defense, may take even longer.

One thing I’ve learned about the embedded marketplace is that it’s not easily described and defined. For example, in consumer devices the time-to-market window is months and things must happen now! In defense systems, lifecycles can be decades and next-generation products must be carefully planned to fulfill their expected lifespans. What this means to the marketing department is: don’t stop talking about multicore! Each of your target customers has their own unique adoption cycle, and until the majority of the market is on multicore platforms, it’s a bad idea to stop talking to the market about what you have to offer.


I think multicore is a disruptive technology in computing, and it’s forcing a lot of development teams to rethink the way they do things. Whenever companies are forced to sit up and rethink things, there is opportunity. Opportunity for the companies to improve things like performance, thermal profile and bill-of-materials costs. For vendors, this is the opportunity to get close to your customers and help them solve the issues they are likely to face and, hopefully, sell them some product along the way.

Software Can Kill, Really

19 10 2009

Wired recently published an article on a software bug in a gamma knife device, which is used in various cancer treatments. The device focuses high levels of gamma radiation at specific points in the body, giving radiation therapy more “surgical” precision. The bug found in this case was that the emergency stop button did nothing when pushed. It is supposed to retract the patient from the machine and turn off all radiation emissions. While the manufacturer has claimed that the additional radiation exposure was minimal, one can see how this could create a very dangerous situation for the patients and the staff controlling the device.

This article immediately brought back memories of the Therac-25 incident (the Wired article points this out as well), in which several patients were given lethal doses of radiation due to software defects and operator misunderstanding. Regrettably, the Therac-25 was developed right in my home town of Ottawa, ON. Not a proud moment in Canadian software history.

These two incidents point out that software can quite easily kill, and that we need not only the right standards (e.g. FDA 510(k)) but also proper design and testing. Of course, I’m going to emphasize the need for tools because, unfortunately, exhaustive testing of these devices is extremely expensive in time and money. We still have a long way to go.

Are hardware guys/gals smarter than us?

6 10 2009

Do you think we software guys/gals, using our current processes, tools and working methods, could design something as complex as the latest Intel Core i7 processor? It has 731 million transistors and is likely one of the most complex man-made objects. I seriously doubt it, certainly not in the time frame that Intel’s hardware engineers can.
How many IC designers create chip layouts by hand? Hmmm, none. How many milestone builds and alpha and beta runs do they do? Likely very few. How many times do they leave major defects in shipping product or let customers test the end product? Not for long (since bankruptcy would soon follow).

There are many reasons why they can do what they do while software engineers struggle with the complexities of their job. A major reason is the early realization in the field of IC design that tools are absolutely necessary to get the job done. They use very advanced CAD and simulation tools and high-level design languages like VHDL (which is related to Ada). They also depend heavily, if not completely, on reusable components. No chip manufacturer would survive by reinventing every ALU or cache controller for each new chip. The combination of advanced CAD tools and reusable components is the key productivity enhancer. To create quality products, IC designers model their designs and test them thoroughly before committing anything to silicon. Moreover, they simply cannot afford to get it wrong. Intel would suffer serious problems if their latest microprocessor turned out to be a dud. In a competitive market, you can’t afford a false step.

Software engineers rarely use advanced modeling and simulation tools, and high-level languages are often scoffed at. Now, someone will likely argue, “The tools just aren’t good enough! We don’t have that fancy technology that hardware engineers have!” Early in my career, the first large colour-screen workstation I used, in 1989, was an Intergraph Interpro 32c. Guess who these were for? Us software guys? Nope, they were for PCB layout for the hardware engineers.

Software engineers could have tools like the IC designers have if there were a fundamental commitment by the profession to use tools, simulation, high-level languages and quality-driven development processes. I argue that without the demand for tools that could make the work easier, they won’t be built. I’m sure early CAD tools were awful, and I’m sure the engineers complained that it was easier to do it by hand. Somebody persevered, because nobody is saying that today. Yet software engineers are just as likely to pick C, Vi and gcc as the only tools they need, just as they would have 10 or 20 years ago. They say, “Those tools can’t do the magic I do with Vi and C!” If the commitment to changing the way we do software exists, the processes and tools will come.

There is some hope, such as Motorola’s drive to implement their Six Sigma process across hardware and software. I also think that driving this into the software engineering curriculum is necessary, and the ACM and IEEE are working on this. See the SE 2004 proposal. However, from my own education, most of the professors knew how to teach programming concepts and languages but didn’t really have much to offer in terms of processes, tools, quality, methodologies, etc. They simply didn’t have the experience or background to back up that learning. Moreover, industry wants engineers who know things like C, assembler and real-time programming, not how to do things through development processes, modeling and simulation (unless, of course, they know all the nuts-and-bolts stuff too!). Our ability to compete and to build the next generation of products depends on an evolution in the way we do things. I’m not seeing it.