How to develop your own embedded system

2026-04-06 03:21:13 · #1
The Story of Embedded Systems Is Older Than Moses

The history of computers used to control devices, that is, embedded systems, is almost as long as the history of computers themselves. In communications, computers were used in electronic telephone exchanges as early as the late 1960s, in what were called "stored-program control" systems. The term "computer" was not yet common then; "stored program" referred to the memory that held the programs and routing information. Storing control logic in memory, rather than wiring it into the hardware, was truly groundbreaking at the time; today we take it for granted. Back then, each computer was customized for its application. By today's standards they were bizarre, with strange special instructions and I/O devices built right into the processor. Microprocessors changed all this by providing a small, low-cost CPU engine around which larger systems could be built. They introduced a fixed hardware architecture, with peripherals connected over a bus, and a general programming model. Software grew up alongside the hardware. At first, writing and testing software involved only primitive development tools, and the software that actually shipped on each project was usually written from scratch. Programming was often done in assembly language or macro languages, because compilers were frequently buggy and robust debugging tools were lacking. Software building blocks and standardized libraries only became popular in the 1970s, and the early operating systems that appeared then were tied to specific microprocessors: when a microprocessor became obsolete, its operating system became obsolete with it unless rewritten for the new processor. Today many of those early systems are only vague memories; who still remembers MTOS? When C arrived, the efficiency, stability, and portability of operating system development improved dramatically.
This was immediately apparent to management, since it offered hope of protecting software investments when a microprocessor became obsolete, and it was good news for the market. Operating systems written in C became increasingly common, and reusable software in general became dominant and kept getting better. In the early 1980s my favorite was Wendon, which offered a C source code library for about $150. It was a kit from which you built your own operating system by selecting components, much like ordering from a menu: you picked the task scheduling and memory management schemes you wanted from the library list. Many commercial operating systems for embedded systems appeared in the 1980s, a trend that continues to this day, and there are now many viable commercial offerings, including giants such as VxWorks, pSOS, Nucleus, and Windows CE. Many embedded systems have no operating system at all, just a control loop. For simple devices this is sufficient, but as systems grow more complex an operating system becomes necessary, or else the software becomes impossibly complicated. Unfortunately, some dauntingly complex embedded systems are complex precisely because their designers insisted on doing without an operating system. Increasingly, embedded systems need to connect to networks, and therefore need networking functionality; even hotel doorknobs now contain networked microprocessors. For an embedded system that was once a simple control loop, adding networking raises the complexity to the point where an operating system is required. Besides the commercial operating systems there are numerous in-house ones. Most of these were written from scratch, such as Cisco's IOS; others are derived from existing systems. For example, many operating systems descend from the same version of Berkeley Unix, because it came with complete networking.
Others are based on public operating systems, such as Phil Karn's KA9Q. Linux is a newer member of the embedded family with many advantages: it is portable to many CPUs and hardware platforms, stable, powerful, and easy to develop with.

Toolkits Overcome the ICE Barrier

The key to developing embedded systems is the availability of tools. As with any job, good tools make the work faster and better, and different stages of development call for different tools. Traditionally, the primary tool for embedded development was the In-Circuit Emulator (ICE), a relatively expensive device inserted between the microprocessor and its bus that lets the user monitor and control all signals entering and leaving the microprocessor. Because it is a foreign component wired into the circuit, it can be awkward and a potential source of instability, but it provides a clear view of bus activity, eliminating much of the guesswork about what the hardware and software are really doing. In the past, some projects relied on an ICE as the primary debugging tool throughout development. However, once the initialization software has solid serial port support, most debugging can be done without an ICE. Newer embedded designs use much cleaner microprocessors, and often the corresponding initialization code already exists, so the serial port can be brought up quickly. This means the work can conveniently be done without an ICE, reducing development cost. Once the serial port is working, it can support a range of more specialized development tools. Linux is built with the GNU C compiler, which works with the gdb source-level debugger as part of the GNU toolchain, providing all the software tools needed to develop an embedded Linux system. A typical debugging sequence for bringing up embedded Linux on new hardware looks like this:
1. Write or port the boot code.
2. Print a string such as "Hello World" to the serial port. (I actually prefer "Watson, come here, I need you", roughly the first words ever spoken over a telephone.)
3. Put a gdb target stub on top of the working serial port. This lets the board talk to another Linux host running gdb: simply tell gdb to debug over the serial line, and it communicates with the stub on the test machine. You can then debug at the C source level, and use the same link to load more code into RAM or flash memory.
4. Use gdb to get the hardware and software initialization code working up to the point where the Linux kernel boots.
5. Once the Linux kernel boots, the serial port becomes the Linux console and can be used for further development. With kgdb, the kernel-debugging version of gdb, this step is often unnecessary; and if you have a network connection such as 10BaseT, you will want to bring it up next.
6. With a full Linux kernel running on the target hardware, you can debug application processes using ordinary gdb or one of its graphical front ends, such as xgdb.

What Is a Real-Time System?

Embedded systems are often mistakenly equated with real-time systems, even though most of them do not actually require real-time behavior. "Real time" is a relative term; purists define it strictly as responding to an event in a predetermined manner within an extremely short interval, on the order of microseconds. Increasingly, strict real-time behavior on such short timescales is implemented in dedicated DSP chips or ASICs, and such requirements arise only when designing low-level hardware: FIFOs, scatter/gather DMA engines, and custom logic. Many designers are anxious about real-time requirements simply because they lack a clear picture of the real-world need. For most systems, a near-real-time response of one to five milliseconds is sufficient, and soft requirements are acceptable.
For example, an interrupt on a Windows 98 class PC might need to be serviced within 4 milliseconds 98% of the time, or within 20 milliseconds always. Soft requirements like these are relatively easy to meet, and involve context switch time, interrupt latency, task priority, and scheduling. Context switch time was once a hot topic in operating system comparisons; in practice most CPUs handle these requirements well, and with today's much faster CPUs the issue matters even less. Strict real-time requirements are typically handled by interrupt routines or other driver-level code running in the kernel environment, where stable timing can be guaranteed. Latency, once a request for service occurs, depends largely on interrupt priorities and on other software that may temporarily mask interrupts; as in many other operating systems, interrupts must be managed carefully to ensure timing requirements are met. On Intel x86 processors this is conveniently handled by the real-time extensions to Linux (RTLinux), which provide an interrupt-handling scheduler that runs Linux itself as the lowest-priority task. Time-critical interrupt responses need not even notify Linux, so tight control over critical timing is possible. An interface is provided between the real-time layer and the more relaxed, ordinary Linux layer, giving a real-time framework similar to other embedded operating systems. Time-critical code is thus isolated and engineered to meet its deadlines, while the results it produces are processed in a more leisurely way, perhaps at the application-task level.

Embedded System Definition

One view is that if an application has no user interface it must be embedded, because the user cannot interact with it directly. This is, of course, an oversimplification. A computer controlling an elevator is certainly embedded, yet buttons select floors and indicator lights show the elevator's position. For networked embedded systems, if the system includes a web browser interface for monitoring and control, the boundary blurs even further.
A better definition focuses on the system's central function and primary purpose. Linux is many-sided here: it provides the basic kernel for embedded work plus all the user interface you might need, so it can handle both the embedded tasks and the user-facing ones. Think of Linux as a continuum, ranging from a stripped-down microkernel with memory management, task switching, and timer services up to a full server supporting every file system and network service. A small embedded Linux system needs only three basic elements:
a) boot code;
b) the Linux microkernel, consisting of memory management, process management, and timing services;
c) an init process.
To make it do something and remain small, you also need hardware drivers and one or more application processes providing the required functionality. Further capability might call for a file system (perhaps in ROM or RAM), a TCP/IP network stack, and a disk for storing semi-transient data and for swap.

Hardware Platform

Choosing the best hardware is a complicated task, burdened by company politics, biases, and the legacy of other projects, and by incomplete or inaccurate information. Cost is often the critical issue; when weighing it, make sure you consider the entire cost of the product, not just the CPU. A fast, cheap CPU can turn into an expensive dog once you add the bus logic and wait states needed to make it work with your peripherals. If you are the software person, the hardware may simply be handed to you. If you are the system designer, it falls to you to set the real-time budget and judge whether the hardware can meet it. Estimate realistically how fast a CPU you need to do the job, then triple it: a CPU's theoretical speed is rarely achieved in practice, and do not assume your application will run entirely from cache. Think equally hard about how fast the buses must be, including secondary buses such as PCI where present.
Slow buses, or buses blocked by DMA, can starve a fast CPU and create congestion. CPUs with integrated peripherals are attractive because there are fewer devices to debug and drivers for common integrated devices are usually available; in my projects, the glue between chips and external peripherals has often caused problems or failed to meet our compatibility needs. But do not assume integrated peripherals automatically make the system cheaper.

Try Stuffing 10 Pounds of Linux into a 5-Pound Bag

A common perception is that Linux is simply too big for embedded systems. That is not necessarily accurate. A typical Linux distribution on a PC carries a great many features for PC users. To begin with, the kernel and the applications can be considered separately. The standard Linux kernel stays resident in memory, and each application is loaded from disk into memory to run; when a program finishes, the memory it occupied is released and the program is unloaded. In an embedded system there may be no disk. There are two ways to remove the dependence on a disk, depending on the complexity of the system and the hardware design. In a simple system, the kernel and all applications reside in memory from system startup onward. This is how most traditional embedded systems work, and Linux supports it too. With Linux, a second possibility arises: because Linux can load and unload programs, an embedded system can exploit this to save memory. Consider a typical system with roughly 8MB to 16MB of flash memory and 8MB of RAM. The flash can hold a file system, connected through a flash driver; alternatively a "flash disk" can be used, a flash component that emulates a disk in software, one example being M-Systems' DiskOnChip, which reaches 160MB. All programs are then stored as files in flash and loaded into memory only when needed.
This ability to load programs on demand is a key feature that supports several others. It allows initialization code to be discarded after the system boots. Linux runs many utilities outside the kernel; these typically run once at initialization and never again, and they can run one after another, sharing the same memory space in turn. That memory is reused for each program in sequence, much as at system boot. This genuinely saves memory, particularly for things like network stacks that are configured once and never changed. If loadable module support is built into the kernel, drivers as well as applications can be loaded this way: the system can examine the hardware environment and install the appropriate software for it, which avoids both the complexity of one huge program handling every hardware variant and the flash it would consume. Software upgrades also become more modular: applications and loadable drivers can be upgraded in flash while the system runs, and configuration information and runtime parameters can be stored as data files in flash.

Non-Virtual Memory

Another feature of standard Linux is virtual memory, the magic that lets application programmers write code with abandon, regardless of program size, letting the program overflow into disk swap space. In an embedded system without a disk this is usually impossible, and such power is unnecessary anyway; in a real-time-critical system you probably would not want it at all, since it introduces uncontrollable timing. The software must instead be designed lean, to fit within the available physical memory, just as in any other embedded system. Note that unless the CPU forces the issue, it is generally wise to leave the virtual memory code in Linux, because ripping it out is tedious.
Another reason to keep it is its support for shared text, which lets many programs share a single copy of code such as the `printf` library routines; without it, each program would carry its own copy. Loading into virtual memory can be disabled simply by configuring the swap space size to zero. Then, if you write a program larger than available memory, the system behaves as if you had exhausted swap: the program will not run, or `malloc` will fail. On many CPUs, the memory management underlying virtual memory also separates programs from one another, preventing one from scribbling over another's address space. Traditional embedded systems typically cannot do this, because they support only a simple, flat address space; Linux's protection here eases development, reducing the chance that sloppy programming crashes the whole system. Many embedded systems deliberately use "global" data shared between programs for efficiency. Linux supports this too, through its shared memory facility, in which only designated regions of memory are shared.

File System

Many embedded systems have no disk and no file system, and Linux can run without them: as noted earlier, application tasks can be linked together with the kernel and loaded as a single image at boot. For simple systems this is enough, though it lacks the flexibility described above. In fact, many commercial embedded operating systems offer a file system as an option, usually either proprietary or MS-DOS-compatible. Linux offers an MS-DOS-compatible file system plus several other options; the others are preferable because they are more robust and fault-tolerant, and Linux also has checking and repair tools that commercial vendors often do not supply. This matters especially for flash-based systems that are upgraded over the network: if the upgrade is interrupted and the file system is damaged, the device becomes useless, and the repair tools can usually recover from such problems.
The file system can live on a traditional disk drive, in flash memory, or on similar media, and a small RAM disk is handy for temporary files. Flash memory is divided into blocks, which may include a boot block holding the first software the CPU runs at power-up, possibly including the Linux boot code. The rest of the flash can serve as a file system. The Linux kernel can be copied from flash into RAM by the boot code, or alternatively stored in its own section of flash and executed directly from there. Another interesting option for some systems is a cheap CD-ROM drive: cheaper than flash, and upgradable simply by swapping discs. Linux can then boot from the CD-ROM and take all its programs from it just as it would from a hard disk. Finally, for networked embedded systems, Linux supports NFS (the Network File System), which opens the door to many value-added features. It allows applications to be loaded over the network, which is fundamental to managing software updates, since the software for each embedded system can be kept on an ordinary server. It can also be used to move large volumes of data, configuration, and status information in and out during operation, a powerful tool for supervision and control. For example, an embedded system can build a small RAM disk holding files that track its current status; other machines can mount that RAM disk as a remote network disk and read the status files over the network. A web server on another machine can then expose the status through a simple CGI script, other applications on other computers can reach the data just as easily, and for more elaborate monitoring, packages such as Matlab can drive graphical displays on an operator's PC or workstation.

Where Are LILO and the BIOS Booted?
When a microprocessor first powers up, it begins executing instructions at a predetermined address, where there is usually some sort of read-only memory containing initialization or boot code. On a PC this is the BIOS. It performs some low-level CPU initialization and configures other hardware, then identifies which disk holds the operating system, copies the OS into RAM, and jumps to it. On a PC this is actually quite complicated, but the principle is what matters here. Linux on a PC relies on the PC's BIOS for that configuration and loading; embedded systems usually have no BIOS, so you must provide equivalent boot code. Fortunately, an embedded system does not need the flexibility of a PC BIOS bootloader, since it only has to configure one known hardware setup. The code is simpler, if tedious: essentially a list of instructions stuffing fixed numbers into hardware registers. It is nonetheless critical, because the values must match your hardware and must be written in a specific order. In most cases it also includes a minimal power-on self-test (POST) that checks memory, blinks the LEDs, and exercises whatever hardware is needed to get the main Linux OS up and running. This boot code is entirely hardware-dependent and not portable. Fortunately, many systems use cookbook hardware designs built around the core microprocessor and memory; chip manufacturers typically offer a reference board that a new design can follow more or less exactly, and the boot code for these reference designs is usually available and easily modified to suit your needs. Only rarely must boot code be written from scratch. To test it, you can use an in-circuit emulator with overlay memory that substitutes for the target's memory: load the code into the emulator and debug it there. If that is not available you can skip this step, but debugging will take longer.
Ultimately, the code must run from reasonably permanent memory, usually a flash or EPROM chip, and you need some way to get it onto the chip; how depends on the target hardware and your tools. One popular method is to put the flash or EPROM chip into a device programmer, which "burns" (stores) the program into the chip; you then seat the chip in a socket on the target board and apply power. This requires a socket on the board, which some designs lack. Another route is a JTAG interface: some chips can be programmed through their JTAG port. This is the most convenient method, since the chip can stay soldered to the board, with a small cable running from the board's JTAG connector to a JTAG adapter, usually a PC card, and there are common programs for driving a JTAG interface from a PC. The same setup can serve for small production runs.

Robustness Is More Reliable Than a Politician's Promises

Running on PC hardware, Linux is famously reliable and stable, especially compared with some of today's popular operating systems. But how stable is the kernel itself on embedded targets? Linux has been ported to many microprocessors, and a kernel ported to a new microprocessor family runs about as reliably as the silicon it sits on. A port usually targets one or more specific boards, each with particular peripherals and CPUs. Fortunately, only a modest portion of the kernel code is processor-specific, so a port concentrates on the differences, mostly in memory management and interrupt handling; once done, these ports are very stable. As discussed earlier, boot strategy is heavily hardware-dependent, so expect some deliberate customization there. Device drivers are messier: some are stable, some are not, and the choice thins out quickly once you leave the mainstream PC platform, where you may need to write your own.
Fortunately, there are many drivers around, and you can usually find one close to what you need and modify it. The driver interfaces are well defined, and many driver classes are broadly similar, so porting a disk, network, or serial port driver from one device to another is usually not difficult. I have found many drivers well written and easy to follow, though you should still keep a book on the kernel architecture at hand. In my experience, Linux is at least as stable as the well-known commercial operating systems I have used. In all of them, problems come from misunderstanding the minutiae of how things work, not from code complexity or fundamental design flaws. Every operating system has its share of horror stories, which I will not repeat here. Linux's strength is that its source is open, well commented, and well documented, which lets you take control of any problem that arises. Beyond the kernel and drivers there are other concerns: if the system has a disk, the reliability of the file system matters. We have more than two years of experience fielding Linux systems with disks. These systems are almost never shut down cleanly; power can be cut at any moment. Even so, the standard ext2 file system has held up well. The standard Linux initialization scripts run fsck, which is very effective at checking the file system and cleaning up loose inodes. It is also wise to change the update daemon's flush interval from the default 30 seconds to every 5 or 10 seconds, which shortens the time data sits in cache before being written to disk and so reduces the chance of loss.

How to Develop

Embedded Linux does have its flaws. For example, while its appetite is not much worse than some commercial competitors', it is a greedy consumer of memory. This can be mitigated by trimming unneeded features, but trimming takes time and, done carelessly, can cause significant problems.
Many Linux applications rely on virtual memory, which is unavailable in a diskless embedded system, so do not assume that any given Linux application will run unchanged on one. Kernel debugging tools are weak, especially at the lowest levels; kgdb makes fault localization much easier, but without it a kernel fault usually just means a reboot, and falling back on print statements is cumbersome. However, the worst problem for me is psychological. Linux is extremely flexible; embedded systems, in general, are not: they are designed tightly to do their intended job as efficiently as possible. The current temptation is to preserve Linux's flexibility, keep the overall functionality, and change as little as possible. The goal is noble, but tailoring everything to the specific task would be a huge effort, and keeping the flexibility costs extra work, extra packages, and sometimes performance. A recurring example is configuration. Consider setting the IP address of a network interface, normally done by running the ifconfig program from a startup script: a 28K program that reads data from a configuration file, which could be replaced with a few lines of code that initialize the appropriate structure directly. Resist the urge: even where such a replacement seems perfectly reasonable, it twists the software into a shape no one else uses. Linux in embedded systems is feasible; it is useful and reliable, and its development costs are comparable to those of the alternatives.