Hydro Ottawa’s MyHydroLink

15 07 2011

Hydro Ottawa’s new MyHydroLink data is enlightening and frightening; I need to cut back on peak usage!





Good read: Agile Ruined My Life

14 09 2010

Daniel Markham has written an insightful post on agile development on his “What to Fix” blog: Agile Ruined My Life.

I’ve mostly stayed away from the Agile, Scrum, and XP debate since I’m not in that business anymore. I’ve always felt that this was invented before (we taught it at Rational as iterative and incremental development) and that there is a lot of hype. This article points out some of these pitfalls, in particular the people who claim to be experts in the field.





Multicore article published in Electronic Component News

25 06 2010

Simplify Multi-core by Understanding Key Use Cases | ECN: Electronic Component News.





The Multicore transition: tools are key to success

2 02 2010

[From my Wind River blog:]

For embedded software companies, where so much emphasis is given to supported hardware, operating systems and middleware technologies, tools can get ignored in the fray. This is unfortunate, because tools are key to project success: they can make the difference between meeting a deadline and missing it by many weeks. Tools are all about developer productivity and getting problems solved faster … The Multicore transition: tools are key to success – Bill Graham.





For embedded systems, going “green” will come from customers

1 12 2009

In embedded systems, and likely in other markets, the need to go “green” will come from customer demand for reduced power consumption rather than from corporate citizenship. This might be stating the obvious: of course companies are far more motivated by money and costs than by the need to protect the environment. However, in embedded systems the move to lower power consumption is becoming a universal concern rather than one specific to mobile or other battery-powered devices.

Power management has traditionally been a concern for devices that need good battery life. In fact, battery life for mobile devices can be a differentiator in the marketplace. We all hate it when our mobile phone runs out of power just when we want to phone home.

Power management and system power consumption are becoming a more general problem in embedded systems. Power consumption has always been a system design problem for thermal reasons, but how much electricity a system consumes has been less of one. Consider the mobile phone network infrastructure. There is huge computing power in every base station, and together the stations consume an enormous amount of electricity. It’s estimated that mobile networks worldwide consume 43 billion kWh per year (roughly 2 W per user). Assuming residential power rates here in Ontario ($0.08/kWh), that is $3.4 billion per year. Power consumption in the mobile networks will become a big focus as electricity rates climb. The motivation will be mostly to reduce costs, but the side benefit will be reduced electricity consumption. Companies can then claim they are going green and look good while saving money at the same time. A win-win situation.
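As a quick back-of-the-envelope check of that arithmetic, here’s a minimal C snippet using only the figures quoted above:

```c
#include <stdio.h>

int main(void)
{
    /* Figures quoted above: worldwide mobile network consumption
       and a residential Ontario electricity rate. */
    const double annual_kwh   = 43e9;  /* 43 billion kWh per year */
    const double rate_per_kwh = 0.08;  /* $0.08 per kWh           */

    const double annual_cost = annual_kwh * rate_per_kwh;
    printf("Annual electricity cost: $%.1f billion\n", annual_cost / 1e9);
    /* Prints: Annual electricity cost: $3.4 billion */
    return 0;
}
```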

On the technical side, what will enable this? Already we are seeing higher computing power with lower TDP in multicore processors and supporting chipsets. This trend will continue as customer demand for lower power intensifies. Unfortunately, the software side is behind the curve in this respect. In general, embedded RTOSes have poor support for power management. This needs to change quickly to enable the power-saving capabilities in the processors. Once the hardware and software capability is there, I think we’ll see increasing use of power management in the non-traditional markets.
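To give a flavour of what RTOS power-management support looks like at its simplest, here is a sketch of an idle hook that parks an ARM core until the next interrupt. The hook name is hypothetical; the registration mechanism varies by RTOS, and real power management goes much further (frequency scaling, tickless idle, device power states).

```c
/* Sketch of the simplest form of RTOS power management: an idle
   hook that halts the core when nothing is runnable. The name
   app_idle_hook is hypothetical; most RTOSes offer an equivalent
   callback. */
void app_idle_hook(void)
{
    /* On ARM, WFI ("wait for interrupt") stops the CPU clock until
       the next interrupt arrives, saving power without losing state. */
    __asm__ volatile ("wfi");
}
```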





Why low-power Wi-Fi is so important

10 11 2009

I saw an announcement on Twitter from @EPNMagazine about Acal Technology’s new Low-Power Wi-Fi Module. I’m not singling out this device in particular, but I think these new super-low-power wireless devices are revolutionary for embedded systems. Why? Because many embedded systems run independently of each other, disconnected from any sort of central control or dispatch system. When they are interconnected, it has traditionally been with wired networking and wired power. With advances in low-power processors and, now, low-power wireless networking, devices can be interconnected using just battery power.

This sounds great, but why is it revolutionary? It changes the installation criteria for these devices. You no longer need to wire them up, meaning no retrofitting an office or a factory to accommodate a new system. The best example I can think of is Heating, Ventilation and Air Conditioning (HVAC) controls for office buildings. When new offices are built, the designers guess where it makes sense to put thermostats on the walls, and an electrician comes in and wires them up. After a tenant moves in and starts to lay out offices and meeting rooms, it’s inevitable that the HVAC system is out of whack: rooms too hot or too cold. What if you could have interconnected, wireless smart thermostats that run on batteries and can be placed wherever (and whenever) it makes sense? Not only does this cut wiring and setup costs, it likely saves many HVAC issues down the road.
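To make the scenario concrete, here is a sketch of the firmware loop such a battery-powered thermostat might run. The sensor and radio functions are hypothetical placeholders; what matters is the duty cycle, since battery life is dominated by sleep current, not the brief active bursts.

```c
#include <stdint.h>

/* Hypothetical board-support functions; real names depend on the
   sensor and Wi-Fi module in use. */
extern float sensor_read_temp_c(void);
extern void  radio_wake(void);
extern void  radio_send_temp(float temp_c);
extern void  radio_sleep(void);
extern void  mcu_deep_sleep_s(uint32_t seconds);

/* Duty-cycled main loop: the device spends almost all of its life
   in deep sleep, waking briefly to sample and report. */
void thermostat_task(void)
{
    for (;;) {
        float temp_c = sensor_read_temp_c();

        radio_wake();             /* power up the Wi-Fi module       */
        radio_send_temp(temp_c);  /* report to the HVAC controller   */
        radio_sleep();            /* radio back to low power         */

        mcu_deep_sleep_s(60);     /* sleep until the next sample     */
    }
}
```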

In some ways, this is already reality, and similar products exist. But as hardware prices drop, the functionality of these battery-powered devices will increase, as will their ubiquity. When they do, it will just be a software problem to figure out how to talk to all these new smart devices online…





Multicore to Many-Core: Hardware too far ahead of software?

28 10 2009

[Image credit: Tom's Hardware (tomshardware.com)]

Following my last post, “Keep talking about Multicore,” a few colleagues pointed out that discussing many-core is applicable too. I certainly don’t disagree, since many-core processors are coming as TNBT (The Next Big Thing). I think we will see adoption in specialized markets sooner than in others. In particular, networking, which is already comfortable with multicore, will use many-core tiles to speed up specialized processing. I can see applications in other areas such as imaging, and in devices that need higher-end processing of audio, video or graphics.

In general, as I stated in my previous post, the embedded market can be slow to adopt new technology in specific market verticals – not because they are old school, but because product timelines and planning are years in advance and product life cycles are much longer. The move to multicore is underway, and many-core in the near term will be more niche than commonplace. Why? I think because we are still coming to grips with the software complexity of multicore. Moreover, tools are still catching up to multicore; how will we debug a 100-core system? Many-core will likely be handled the way heterogeneous multicore solutions are today (e.g. a general-purpose CPU plus a DSP or GPU). Vendors might supply libraries that take advantage of many cores while leaving a small number for general-purpose processing. I have heard customers say that heterogeneous systems are particularly hard to debug because they are usually supplied with two disparate tool chains. Interestingly, an article from 2007 was quite prescient about this very topic, supporting my premise that hardware has leapfrogged the software – of course, would I quote an article that didn’t?

In the near term, I think we will see 16- to 32-core chips, which will be “quite-a-few-core” systems. In these cases, I think a hypervisor can bring sanity by allowing you to create several virtual targets in one chip. For example, you could create a four-core controller plus 12 specialized processing engines (this is being done today for deep packet inspection). This is manageable because the application complexity is isolated to your four-core target, and the specialized processing engines are identical and relatively simple.
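Hypervisor configuration is vendor-specific, so as a stand-in, here is a sketch of the same partitioning idea expressed with Linux CPU affinity on a hypothetical 16-core part: a controller kept on cores 0–3 and twelve identical engines pinned one each to cores 4–15.

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Pin the calling thread to a single core. */
static void pin_to_core(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *engine(void *arg)
{
    pin_to_core((int)(long)arg);
    /* ... identical, relatively simple packet-processing loop ... */
    return NULL;
}

int main(void)
{
    pthread_t workers[12];

    /* Keep the controller (this thread) on cores 0-3. */
    cpu_set_t ctrl;
    CPU_ZERO(&ctrl);
    for (int c = 0; c < 4; c++)
        CPU_SET(c, &ctrl);
    pthread_setaffinity_np(pthread_self(), sizeof(ctrl), &ctrl);

    /* One worker engine per core, cores 4-15. */
    for (int i = 0; i < 12; i++)
        pthread_create(&workers[i], NULL, engine, (void *)(long)(4 + i));

    for (int i = 0; i < 12; i++)
        pthread_join(workers[i], NULL);
    return 0;
}
```

Under a real hypervisor the isolation is stronger (each partition boots its own OS image), but the shape of the design is the same.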

Whatever way the many-core technology pans out, I still contend that software is behind the curve versus the hardware. And I still won’t stop talking about multicore…