Ancient Technical Articles
The external links are listed below, followed by the articles themselves.
Software Project Survival Guide [below]
Consider the Alternative
Embedded Microprocessor Systems: Real World Design [below]
Microsoft held Windows CE DevCon in San Jose, California on April 6-8, 1998. Michael Fitzpatrick, a DDJ contributor, had the opportunity to chat with Tony Barbagallo, Microsoft’s Lead Product Manager, and Don Chouinard, Microsoft’s Product Manager, about Windows CE. Here is an excerpt from that conversation. For more details about the conference itself, read Al Stevens’ account in the July 1998 issue of DDJ.
DDJ: There's been some talk about bringing COM to WinCE. What are Microsoft's plans?
DC: COM’s in there now. It is InProc only. It doesn’t marshal parameters across process boundaries or across the wire…yet. That will ship in our next major release. All of our compilers support COM today.
DDJ: Microsoft has put a lot of work into Windows CE, and it recently bought WebTV. Microsoft already makes mice and keyboards, but those are passive devices. A lot of folks are worried that Microsoft will start selling computer hardware now.
TB: I can’t speak for the WebTV group because I’m tasked with marketing Windows CE, and I don’t really know what is going on with WebTV. I don’t believe that Microsoft has any intention of building and selling computers that run any of our OSes, with the exception of WebTV. We’re mainly interested in developing relationships with technology partners who can provide solutions for the embedded market. That’s kind of an expensive platform, so we go beyond that and also support an emulation of specific targeted devices, such as handheld PCs, Palm PCs and Auto PCs. You can write a lot of code and test it against the emulated device. This is a good solution for routines that don’t require the presence of physical hardware.
DDJ: Windows CE looks a lot like a desktop system when it is installed in a Handheld PC (HPC). If I want to add a new peripheral device, does Windows CE have installable device drivers?
DC: Yes. In fact, it is very elegant. You take your installable device driver, load it into the object store, and put an entry into the registry. Upon boot, we scan the registry and load each of the installable device drivers in turn. You can even control the order in which the device drivers are loaded, in case you’re relying on another device driver being there.
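Chouinard's description maps to a registry layout along these lines. The key and value names below follow the convention documented for Windows CE stream drivers; treat the entry itself as an illustrative sketch, not a transcript of any shipped device's configuration:

```
; Hypothetical registry entry for an installable serial driver.
; The kernel scans HKEY_LOCAL_MACHINE\Drivers\BuiltIn at boot and
; loads each driver listed there.
[HKEY_LOCAL_MACHINE\Drivers\BuiltIn\Serial]
"Dll"="serial.dll"    ; driver DLL stored in the object store
"Prefix"="COM"        ; device name prefix (yields COM1:, COM2:, ...)
"Index"=dword:1       ; instance number appended to the prefix
"Order"=dword:0       ; load order, for drivers that depend on others
```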
DDJ: But, if I want to go all the way down to the hardware and trap an IRQ, don’t I need to link the Interrupt Service Routine (ISR) binary with the operating system code when I compile it?
TB: The drivers that go all the way down to the hardware — they’re loaded at compile time.
DC: The person who builds the operating system for a specific hardware platform does include built-in device drivers for certain devices such as the display and keyboard. But most of the device drivers they provide are written as installable drivers. In Windows CE, interrupts (IRQs) go into the kernel and, in just a few machine instructions, the kernel passes control directly to the ISR. This is how an installable device driver is tied to an IRQ.
DDJ: At the Embedded Systems Conference last October, no one claimed that Windows CE had hard real time. Now, at this conference (Windows CE DevCon), you’re saying that WinCE is hard real time. Okay, so the interrupts are not reentrant, but you’re claiming to be hard real time. You’re talking about “jitter” now, not reentrancy and nested interrupts.
TB: Well, Windows CE interrupts are reentrant at the thread level. The mechanism we use is, the kernel calls an interrupt service routine (ISR) usually within one to eight microseconds from when the interrupt occurs. At that point, INTs are still disabled. We expect the ISR to be coded in assembly language and to execute very quickly. The ISR runs in privileged mode and has full access to system resources. When it is done, the ISR returns to the kernel, which schedules an interrupt service thread (IST) that does most of the work. The IST is reentrant, and can be nested if it is coded properly. The IST runs in user mode, just as any other application does, but at a higher priority.
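The ISR/IST split described here can be simulated off-target with ordinary threads. The sketch below is a minimal Python analogy; the names and the event mechanism are illustrative, standing in for the Windows CE kernel and its interrupt APIs:

```python
import threading

handled = []
irq_event = threading.Event()

def isr():
    # Plays the role of the assembly-language ISR: interrupts are
    # masked while it runs, so it does the bare minimum and signals
    # the interrupt service thread.
    irq_event.set()

def ist():
    # Interrupt service thread: scheduled like any other thread
    # (but at high priority on a real system); it does the bulk of
    # the device-servicing work with interrupts enabled.
    irq_event.wait()
    handled.append("device serviced")

worker = threading.Thread(target=ist)
worker.start()
isr()           # simulate the hardware interrupt firing
worker.join()
print(handled)  # ['device serviced']
```

The design point being simulated: the time-critical path (the ISR) is tiny and bounded, while the unbounded work migrates to a schedulable thread.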
DDJ: But this is not truly hard real time.
TB: No, not yet, but we’re committed to true hard real time in a future release, version 3.0 due in the second quarter of 1999. Even though we’re already below 10 microseconds latency, we’re going to support nested interrupts. It’s not just the speed. We’re going to add support for a larger number of interrupt lines and semaphores.
DDJ: Well, with Windows CE, it looks like everything that was old is new again. After years of getting comfortable with multi-meg RAM and multi-gig hard drives in OSes, Microsoft is back to counting bytes and CPU cycles again. Aren’t you afraid that feature creep will take you out of the embedded system space? When you start adding things like Java, which adds 2 MB to the system ROM and 2 MB to system RAM, aren’t you taking away the incentive to use Windows CE?
TB: Well, Windows CE is componentized, and it is a deterministic OS, unlike the other Microsoft OSes. We clearly define what our upper bound for event response is. And if you don’t want Java or a GUI or any other feature, you can leave it out.
DDJ: What about network and Internet connectivity? What direction is Microsoft looking at? Right now, it seems that Microsoft is pushing Ethernet and NDIS drivers for connectivity.
TB: The market will dictate where we go with that. We have to be responsive to our customers' needs.
DDJ: Yes, but if I’m a developer, I’m going to pick the connectivity solution that comes with Windows CE. Microsoft has a vested interest in promoting some connectivity options over others. You have WebTV, which implies cable modems, an ADSL deal in the works, and sometime in the future, you’ll be selling satellite connections. If I make an ATM switch, how can I get Microsoft to add support?
TB: We try to keep the system very modular. If you were building one of those boxes, you would go to someone with a background in the technology and convince them to port it over to Windows CE for your device.
DDJ: Well, where do you think that the technology will go? Is the future in ADSL?
TB: That’s a real possibility. Part of my role is to work with technology partners and help them decide whether it makes sense to port their technology and offer it as a third party. We have a base set of enabling technology, but beyond that, we really want to foster third-party business and a third-party market.
DDJ: Well that brings up an interesting point. A year ago, there was only a handful of Windows CE distributors. Since then, the story from Microsoft has been changing a lot. Every month it is a different story. Seems like now, there are just too many vendors offering Windows CE, and the vendors are competing with each other.
TB: Absolutely. When I came into this business, I looked at the landscape and looked for distribution channels that offered differentiation from the current distributors. That's where Applied Microsystems and Microtec come from; that's where Integrated Systems Design comes from. It's hard to beat their expertise in the real-time embedded systems space. You may say it looks like these guys raised the bar on the people that were there. Maybe we have. We have very objective criteria that basically come down to units shipped.
DDJ: When you first started to work with technology partners Microsoft had very strict licensing requirements. Now you’re giving away the development tools. I got a copy — three CDs — in a recent issue of Embedded Systems Programming.
TB: Yes. That all changed as well. The decision was to make it widely available. Our goal is to make a huge third party network of system integrators and supply the tools and technology, because that’s how we think the market is going to grow in this space.
DDJ: So, if I have a new CPU that I want to offer to the world, who is going to port it over to Windows CE?
TB: We do the ports for new processor architectures, but only based on customer demand or based on a customer relationship with a semiconductor partner.
DDJ: The last issue that I’m interested in is the legal issue, the FTC/DOJ, trade issue. Now that you’re entering a new region of the marketplace, are you taking any action to prevent any conflicts with the government?
TB: We’re only concerned with developing a product that is right for the market.
DDJ: Are you ignoring the reality of the government?
TB: On my list of tasks, taking into consideration the ramifications, the potential of monopolistic products we may build, is not one of them. That is not one of my objectives. It is not one of my manager's objectives either. It's just not a consideration for us. We are concerned only with making Windows CE a success in the embedded system space. You know, we're not the only big player entering the embedded system space either. Sun is getting into the RTOS business in a big way.
And some of the big RTOS vendors like WindRiver make a majority of their money in setup costs for their tools. So Microsoft’s business model is that we collect a royalty for each unit that goes into production. The cost for our tools is very low compared to other RTOS vendors. We charge $500 to get started compared to $25,000 for some other RTOS tools. We aren’t looking to make much money on the tools and startup expenses. A lot of embedded systems never make it into production. In that case, the other RTOS vendors are ahead. They’ve sold their tools, they’ve sold their services. They didn’t get the run-time royalty, but they’ve sold the tools and made 80 percent of what they would have if the customer had gone into production. It’s the same for our systems integrators such as Microtec, AMC, and ISI. They receive the revenue from their tools and their services, but Microsoft would receive no run-time royalties.
Mastering Regular Expressions [top]
Powerful Techniques for Perl and Other Tools
Jeffrey E.F. Friedl
O’Reilly & Associates, 1997
342 pp., $29.95
If you’ve spent any time crafting “regular expressions,” you know they are the closest thing there is to mangled punctuation. And because they can mean different things to mathematicians and programmers, regular expressions are difficult to define, too. Indeed, the meaning changes from tool to tool. As used in Jeffrey Friedl’s Mastering Regular Expressions, the definition “special search strings that match patterns of data (typically text), rather than specific sequences of bytes or characters” is sufficient.
To appease the theorists, but mostly for notational convenience, regular expressions will be referred to here as a “regex.” It is important to note that there is no standard for regex. Each tool defines its own regex syntax and the extent to which it is implemented. Some valuable regex features are not always available in every tool.
Mastering Regular Expressions is about regex, not Perl. Friedl covers regex in Perl, but says nothing about the many other Perl language features. Still, many people think of Friedl’s book as a “Perl” book. (In fact, it is even miscategorized as a Perl book by the Library of Congress.) Granted, Perl is a language that includes seamless use of regex as its main feature, and Perl’s regex implementation is unsurpassed. Understanding regex is vital to using Perl effectively. Anyone who programs in Perl for a living would not argue with that.
However, regex is found in many places, including languages (Python, Tcl, and Expect), tools (awk, lex, and grep), and editors (Emacs, vi, and sed). It can save you lots of time, if you are willing to learn it. Friedl spends time discussing many regex tools in Chapter 6 and dedicates all of Chapter 7 to Perl regex.
The author carefully brings us to an understanding of regex through example and analogy. Even a simple regex can be difficult for the inexperienced user to read.
Creating a regex is intuitive after you have some experience, but getting the experience can be quite frustrating. When you start learning regex, you have to figure out matching problems purely analytically, which is especially difficult since your tool’s documentation of regex will most likely be inadequate and there is no regex debugger. For example, in Perl you can construct a regex that matches nested expressions using parentheses. The regex in Example 1(a), which is borrowed from page 126 of Friedl’s book, matches a parenthesized expression and allows parenthesis nesting up to one level, and will perform the match in the text pattern in Example 1(b). Just look at this regex! Now you can see what a great feat it is to write a book on regex that is actually readable.
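For readers without the book at hand, a regex in the same spirit can be tried in Python's re module. The pattern below is my rendering of the one-level-nesting idea, not a verbatim copy of Friedl's Example 1(a):

```python
import re

# Match a parenthesized expression, allowing one level of nesting:
# the body is any run of non-parenthesis characters, or a fully
# parenthesized inner group containing no further parentheses.
nested = re.compile(r"\(([^()]|\([^()]*\))*\)")

m = nested.search("f(a, (b + c), d) + g(e)")
print(m.group(0))  # (a, (b + c), d)
```

Even this modest pattern shows why regex has a reputation for mangled punctuation, and why a readable book on the subject is a feat.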
Even with a basic understanding of regex, you can still learn a lot from reading Mastering Regular Expressions. If nothing else, the book is well researched, covering even obscure areas of regex (the POSIX regex standard, for instance). Many of the examples are practical, covering tricky problems such as matching C comment blocks, IP matching, and date matching. And Friedl’s discussion of regex efficiency is valuable. Understanding the inner workings of regex can mean the difference between writing a regex that may not match in your lifetime, or writing one that can make a quick match. As always, it is important to note that optimization can lead to a trap. When too much knowledge of a process’s internals is assumed, those assumptions can create inefficiencies when the technology changes.
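The "may not match in your lifetime" failure mode is easy to reproduce. The snippet below uses a standard pathological pattern (not one of Friedl's examples); the absolute timings are machine-dependent, but the gap between the two is not:

```python
import re
import time

text = "a" * 22 + "!"   # almost matches, forcing full backtracking

# Nested quantifiers make a backtracking engine try roughly 2^n
# ways to split the run of a's between the inner and outer "+"
# before it can report failure.
pathological = re.compile(r"^(a+)+$")

start = time.perf_counter()
assert pathological.match(text) is None
slow = time.perf_counter() - start

# An unambiguous equivalent fails in linear time.
linear = re.compile(r"^a+$")
start = time.perf_counter()
assert linear.match(text) is None
fast = time.perf_counter() - start

print(f"pathological: {slow:.4f}s, linear: {fast:.6f}s on 23 characters")
```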
One reason this book is important is because regex is intimidating — and Friedl makes it easier to understand. Many programmers don’t use the regex available in their development tools even though regex would probably save them a lot of time. Think of that the next time you find yourself stuck with a pile of someone else’s code that you need to maintain.
Mastering Regular Expressions is destined to be a classic reference on the subject it covers. If you’re just getting started with regex, this book will save you a lot of time (and grief). If you are already using regex, it will help you extend your ability and understanding.
Example 1: (a) A Perl regex that matches a parenthesized expression and allows parenthesis nesting up to one level; it performs the match in the text pattern in (b).
Dynamics of Software Development [top]
Microsoft Press, 1995
184 pp., $24.95
Upon sitting down to read Jim McCarthy’s Dynamics of Software Development, I expected a humorous perspective on structured analysis, ISO 9000, and software requirement specifications. But McCarthy doesn’t bother to bore us with such dull topics. Instead, he lays out 54 rules of the game he calls the “software project.”
In reality, the rules are just a gimmick. McCarthy’s explanation of how to play the game so everyone wins is more important. He gives us valuable insight from his experiences as project manager for Microsoft Visual C++ 1.0. Throughout the book we are treated to gems such as:
The visionary leader will conceive of a future reality that must be created by the effort of the community, while the demagogue will perceive a need to remove something from the current situation. The visionary will harness the communal psychic energy toward a common goal, something that will require the delay of gratification; the demagogue will move to immediately sate the baser instincts he or she has excited.
No, the book is not as esoteric as this quote suggests.
McCarthy writes from the perspective of a program manager who wants to be team captain — not boss, friend, or parent. Of course, the program manager must put his role into perspective.
Before the program manager can be worth anything to the team, he or she must be thoroughly disabused of the notion that he or she has any direct control.
Dynamics of Software Development grew out of a talk entitled "21 Rules of Thumb for Shipping Great Software" that McCarthy used to give at customer sites. He expanded the list to 54 rules, labeling it a game because games are fun. The end result of the game is intellectual property (software), and it is much easier to create intellectual property when you are having fun. Likewise, reading a book about a game is much easier than, say, reading a book on structured analysis or project management. After reading his book, however, I suspect that McCarthy has never read a book on project management: his approach is so fresh that it could only have evolved directly from his experience.
Nevertheless, I didn’t agree with some of what McCarthy writes — in particular, rule 4, “Don’t flip the bozo bit.” McCarthy’s point is that project managers shouldn’t get it stuck in their heads that someone is a bozo. But face it, Jim, there really are bozos in life, so deal with it. However, his “bozo bit” perspective will help me deal much better with them in the future. McCarthy explains what the bozo bit costs when it is flipped. This is important because for many people, it is very hard to clear the bozo bit once it is flipped.
Also, the blanket statement that “Most Software Sucks” is a bit extreme. If I believed that, I would not spend my waking hours writing code. The only time that software sucks is when it causes users to lose work. Boy does that suck! Most programmers write code that does what they intend, and that is good.
Still, I agree with most of McCarthy’s book. He relays much that is not obvious, yet fundamental and true. He lets us know what it is like to be on his team, without burdening us with the technical aspects of the day-to-day coding and project management details. Dynamics of Software Development is easy to read, provides valuable insight to the software-development process, and is especially important to people who haven’t had the pleasure of being on a software team.
Consider the Alternative
McConnell’s latest book, Software Project Survival Guide, is a recipe for the success of any software project. He has made it easy for project managers to be effective by outlining the items needed for a software project’s success in clear, tabular form and giving examples to demonstrate his concepts.
As a bonus, many of these items are online at his web site. In Chapter 2, the "Survival Test" (http://www.construx.com/survivalguide) sums up the important points presented in the rest of the book. I highly recommend it to everyone. It will give you valuable insight into the possible shortcomings of your own project. Chapter 7 includes a "Sample Top Ten Risks List". Using this list as a guide, you'll be better able to identify risks and do contingency planning. And, of course, McConnell doesn't neglect QA plans. Chapter 9 identifies the "Recommended Quality Assurance Practices and Responsibilities for This Book's Work Products."
In a sense, this whole book is about software quality. Software project teams have little hope of delivering their products on time and on budget without ensuring quality throughout development. McConnell reminds us many times throughout the book:
Researchers have found that an error…tends to cost 50 to 200 times as much to correct late in the project as it does…close to the point where it originally [occurred].
By software quality, I don't mean to say that this book covers only quality assurance (QA) issues. The QA phase is the final stage of a project, where defects are found and corrected. Exhaustive testing is the only way to be sure that software meets its design goals and is defect free, and it is rarely feasible, so the best way to ensure quality is to build it into the development process. Otherwise, QA can take enormous amounts of time, and it is a major cause of the failure of software projects. In McConnell's view, such processes are the most important part of developing any software project, and they are the only way to be sure that product quality is maintained for the duration of the project.
One of the biggest hurdles that Software Project Survival Guide must overcome is that many programmers view "process" as a four-letter word. They see it as rigid, restrictive, and inefficient. What they don't realize is that they eventually employ processes anyway, reacting to situations that could have been avoided had they recognized the value of process up front. They use processes reactively and may not even be aware that they are using them at all.
This book creates a valuable template for any software project manager to follow. This is especially evident if you look at the two base references McConnell cites in the preface. His interpretation of the models makes it easy for us to benefit from them. The Software Engineering Institute's (SEI) "Key Practices of the Capability Maturity Model, Version 1.1" (http://www.sei.cmu.edu) is a gold mine of hard-won industry experience. NASA's Software Engineering Lab's (SEL) "Recommended Approach to Software Development, Revision 3" (http://fdd.gsfc.nasa.gov/seltext.html) describes a structured sequence of practices that may be used to implement many of the processes described in the SEI document.
You can get a good idea of McConnell's views on software project management before reading this book by visiting his web site (http://www.construx.com/stevemcc), where he has included many of the magazine articles he has written. I particularly liked "From Anarchy to Optimizing" (Software Development, July 1993); in fact, I was disappointed that he didn't include this material in his book. The article describes the SEI Process Maturity Model in understandable terms. According to the article, only 1 percent of companies use the methods necessary to reduce the cost and improve the quality of their software (Note 1). This means that the ideas McConnell helps to promote are generally not well recognized.
The Software Project Survival Guide doesn't adequately address this point. There is very little discussion of the relationship between creating and using processes (Chapter 3) and shipping software (Chapter 4). In my experience, processes are almost always neglected in the name of shipping product. For example, I was once denied a job because the boss thought I would focus too much on "process" and that it would distract me from the work. And the day before I wrote this review, a coworker told me that he prefers to "get to the work," rather than "waste time in planning a software product." This is a prevalent and troubling viewpoint. Personally, I enjoy the planning and design stages because they are creative; the implementation seems more like work, and it can be frustrating work if the design and planning are shortchanged.
Also, I would have appreciated some comparison of the more common software project techniques. In my career as a consultant, I have seen many methods used to measure and control software development. Although McConnell recognizes that other management methods exist, he makes no attempt to summarize any of the methods in common use. For many programmers, this would help them understand the software project plan that he describes.
Many books on software project management are highly technical and difficult to read, like the SEI and SEL documents. Others are anecdotal, like McCarthy's Dynamics of Software Development, which I reviewed in Dr. Dobb's Journal in August 1997. Compared to McCarthy's book, McConnell's book presents an ideal approach to project management. McCarthy's book recounts his experiences as a Microsoft project manager. His style is informal, his book is fun to read and can be read quickly, but it will soon be forgotten. When his writing becomes awkward, the information isn't very valuable anyway. In contrast, McConnell's book will be remembered for a long time. He is an experienced wordsmith and develops his ideas with clarity and purpose. Even though it is easy to read, I found myself reading it slowly. Every paragraph has something to offer the reader.
Soon everyone will see the advantage of using the techniques McConnell promotes. The way software is developed is changing, and this book is moving that trend forward. McConnell shows us proven ways to cut defects to one ninth and costs to one fifth of their present amounts (Note 2). Results like that will get anyone's attention. To stay competitive, many more companies will adopt these methods. They are not difficult, they are not magic, there is no trick to it. They rely on patience, planning, common sense, and adapting from experience. Like McConnell's previous two books, this one will also be widely read; it may even become his most important and most popular book.–Dr. Dobb's Electronic Review of Computer Books
Embedded Microprocessor Systems: Real World Design [top]
Pub. Date: November 1996
“The entire point of an embedded microprocessor is to monitor or control some real world event.”– Stuart Ball
Although embedded-systems development can be a simple subject, it often falls into the realm of black art. Why? Mainly because of the many things that can go wrong in a design. Data sheets and application notes are notoriously hard to read, always biased towards the manufacturer's products, and rarely warn you of the pitfalls you may encounter. Often app notes are helpful only as a reference; most embedded-system designs are unique and cannot be characterized using one manufacturer's app notes and data sheets. In short, embedded systems can do amazing things, but understanding how to create systems that work well can be mysterious, unless you have a guru on staff.
Stuart Ball is one such embedded-systems guru. His book, Embedded Microprocessor Systems: Real World Design, will help novice embedded-systems designers understand and avoid the problems they may face. This book also helps dispel the notion that embedded-system development is a black art. When I brought this book to the office, it was a particularly big hit with recent engineering graduates.
Embedded Microprocessor Systems is a quick read for anyone with a basic electrical engineering background. Many topics are covered; yet little time is wasted on obscure topics. Its real-world examples are especially valuable, including Ball’s “DOs” and “DON’Ts.” Real hardware is presented in examples that include schematics and code.
The rules for avoiding problems in embedded-systems design are fundamental. Experienced designers are familiar with them, and most of these engineers didn't pick them up from a book. They learned about design pitfalls either from an expert on their project, or the hard way–by finding and correcting problems in their own designs.
Reading this book will save you more time during the design and debugging steps than you spent reading the book. One thing I especially like about Embedded Microprocessor Systems is that it is very concise–about 38,000 words concise, in fact. Of course, you sometimes pay a price for this. There are instances where a numbered list and a diagram or figure would work better than presenting an idea in paragraph form. For example, the description of accessing dual-ported RAM (page 127) is not as clear as it could be. The memory-access sequence would be easier to understand if it had been written as a list and referenced a timing diagram.
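As an illustration of why a list plus a diagram helps here, the following sketch simulates a generic hardware-semaphore arbitration sequence for dual-ported RAM. This is a common scheme in dual-port parts, not a reconstruction of the sequence on Ball's page 127:

```python
# Each port requests the hardware semaphore and reads it back to
# learn whether it won arbitration; only the winner touches the
# shared memory region, and the loser polls until release.
class DualPortSemaphore:
    def __init__(self):
        self._owner = None

    def request(self, port):
        # Writing a request latches this port; the hardware grants
        # the semaphore to at most one requester at a time.
        if self._owner is None:
            self._owner = port
        return self._owner == port   # read-back: did we get it?

    def release(self, port):
        if self._owner == port:
            self._owner = None

sem = DualPortSemaphore()
assert sem.request("left")        # left port wins the semaphore
assert not sem.request("right")   # right port must poll and retry
sem.release("left")
assert sem.request("right")       # now the right port gets access
```

Written out as request, read back, access, release, with a timing diagram alongside, the sequence is much easier to follow than in paragraph form, which is exactly the reviewer's point.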
Ball discusses system design in Chapter 1. System design is important because, as Ball says, “if you don’t know where you’re going, how will you know when you get there?” To summarize (from Chapter 1):
The documentation procedure that I have found useful and that will be followed here is as follows:
- Product Requirement Definitions
- Functionality Description
- Processor Selection
- Hardware Design
- Firmware Design
These steps are not necessarily serial.
Still, I would have preferred more on design in Embedded Microprocessor Systems, since that is the fun part in any embedded-systems project. Building hardware, writing code, and debugging the system is more like work. In the real world, design rarely gets the time and attention it deserves. Hindsight makes it easy to say that the system design was shortchanged. But it is hard to know ahead of time where we will need to spend design time to avoid the problems we’ll encounter. Many project managers are eager to get right to the implementation, so they can show their boss something. I hope that they will spend at least as much time on design as Ball recommends.
Ball generally discusses design appropriately for the type of examples he presents. But I would prefer to see state transition diagrams in addition to flow charts for his examples at the end of the book. And his description of state and data-flow diagrams is inadequate. As design tools, I believe they are more important than he acknowledges them to be. To be fair, there are entire books written on state-transition diagrams, data flow-diagrams, and design methodology.
Chapter 5, "Adding Debug Hardware and Software," is insightful. Understanding this topic is key to any successful embedded-system project. It is important to consider the issue in the design phase, so that debug facilities can easily be added to a system during integration, as they're needed. The one thing missing from this chapter (and from Chapter 4, "Interrupts in Embedded Systems") is the obvious and simple, though not universally applicable, technique of developing ISRs in a controlled environment, where the designer's control over the system stimuli makes it easy to step through code and trace hardware states.
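That technique amounts to calling the handler directly with simulated stimuli before it ever runs under a real interrupt. A minimal sketch follows; the UART register names and bit values are invented for illustration, and the same approach works in C on a host build of the firmware:

```python
# Develop the "ISR" off-target: invoke the handler with hand-built
# register values so every path can be single-stepped and asserted on.
DATA_READY = 0x01
received = []

def uart_isr(status_reg, data_reg):
    # The same logic that would run on the target under a real IRQ.
    if status_reg & DATA_READY:
        received.append(data_reg)

# Controlled stimuli: we decide exactly when the "interrupt" fires.
uart_isr(DATA_READY, ord("A"))   # data ready: byte is captured
uart_isr(0x00, ord("B"))         # not ready: byte must be ignored
print(received)                  # [65]
```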
The major criticism I have of Embedded Microprocessor Systems is that it just isn't big enough. There are many things left unsaid. I would have liked to see some real examples using real-time operating systems, since they are becoming more important in embedded systems every day. And with Microsoft's Windows CE, the RTOS business will heat up quickly.
Still, the conciseness of this book is also the reason I enjoyed reading it. Ball gets right to the point in the topics he presents, and I was usually able to understand them easily. The examples are simple and well presented, and they are developed throughout the book, with complete documentation in the appendices.
This is an excellent primer on embedded systems. Reading this book made me eager for Ball's next book on embedded systems.