What Is Table-Driven Design?


If you've ever been to a restaurant and had a waiter come up to your table and ask, "Would you like fries with that?" and then follow up by asking whether you'd like cheese on top of those fries, you're already familiar with the spirit of table-driven design! A table-driven software engineering approach simplifies and generalizes applications by separating program control variables and parameters (the rules) from the code and placing them in separate, external tables. The goal is to decouple program control data from application logic and emphasize modularity, which makes change management much easier.

When you think of a table, do you think of a nice place to have dinner? Or maybe a piece of furniture that holds your clothes? What if I told you that tables are also an excellent way to transform information from one kind into another? Yes, that's right. A lookup table can map city names to distances, populations, or land areas, depending on the information available in other tables or in other parts of the same table. You can also use decision tables to transform a collection of conditions directly into a series of actions or procedures. Tables make it easy for designers and programmers to understand what the program does at a glance, because they represent relationships concisely and almost graphically. Those things have value, if we've learned anything from eating at restaurants and organizing our closets!

One clarification: the "table" in table-driven design doesn't have to be a literal table data structure. The control data could live in an array, a list, a file, or even a graph or chart. What matters is that the rules are kept as data, separate from the logic that interprets them. The flip side is also true: just because a program uses arrays, lists, or files doesn't mean it follows a table-driven design; it just means it's using some data structure! A sketch of the idea in code follows below.
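To make the idea concrete, here is a minimal sketch in Python. The rates, thresholds, and function name are made up for illustration: the point is that the shipping fees and discount rules live in small tables the logic merely consults, so changing a rate means editing a row of data rather than rewriting branching code.

```python
# A minimal sketch of table-driven design (hypothetical pricing example).
# In a real system these tables might be loaded from a CSV, config file,
# or database instead of being defined in source code.

# Lookup table: shipping type -> flat fee.
SHIPPING_RATES = {
    "standard": 4.99,
    "express": 9.99,
    "overnight": 24.99,
}

# Decision table: (minimum order total, discount rate), checked top to bottom.
DISCOUNT_TABLE = [
    (100.0, 0.10),  # orders of $100 or more get 10% off
    (50.0, 0.05),   # orders of $50 or more get 5% off
    (0.0, 0.00),    # everything else gets no discount
]


def price_order(subtotal: float, shipping: str) -> float:
    """Compute the final price using only the tables above."""
    # Look up the shipping fee instead of branching on the shipping type.
    fee = SHIPPING_RATES[shipping]

    # Walk the decision table until a row's condition matches.
    discount = next(rate for threshold, rate in DISCOUNT_TABLE
                    if subtotal >= threshold)

    return round(subtotal * (1 - discount) + fee, 2)


if __name__ == "__main__":
    print(price_order(120.0, "standard"))   # 112.99
    print(price_order(60.0, "express"))     # 66.99
    print(price_order(20.0, "overnight"))   # 44.99
```

Notice that adding a new shipping tier or discount bracket only touches the tables; the `price_order` logic never changes, which is exactly the kind of change management table-driven design is meant to ease.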


Related Terms by Software Development

Scanning Electron Microscope (SEM)

The scanning electron microscope does the same basic job as a standard light microscope but with far greater magnification and resolution. Imagine trying to look at the very tip of your nose right now and see what's there. To get a close look at those minuscule hairs, you would need a powerful microscope, and if you squinted at your face that intently, you would probably end up with a headache. Imagine instead employing a scanning electron microscope, where the electrons do all the work for you. Because electrons allow much finer resolution than visible light, the resulting images have better integrity and detail, so objects can be seen more clearly and used for cutting-edge research and engineering.

You may not believe something like this could be beneficial in regular life, but it absolutely is. Without scanning electron microscopes we wouldn't be able to see how the tiny parts of bugs work together to form a whole, or how materials and cells are structured at scales far too small for the eye. Much of what we know about the microscopic world comes from the scanning electron microscopes in use today.

So what is it, exactly? A scanning electron microscope, also known as an SEM, is a type of microscope that uses an electron beam to analyze whatever is being viewed, and it is really interesting. An electron gun releases the electrons; think of it as a light bulb that emits electrons rather than photons (light particles). The beam then passes through a few different components, such as scanning coils, before reaching the sample, and a detector picks up the backscattered electrons that come off it. Those backscattered electrons are transformed into signals and delivered to a display screen, so while the scan runs you're looking at images of your sample on your computer or television screen - that's awesome!


Secure Hash Algorithm (SHA)

Secure Hash Algorithm (SHA) is a family of cryptographic hash algorithms published by the National Institute of Standards and Technology (NIST) with input from other government and private parties. Cryptographic hashes (or checksums) have been used for electronic signatures and file integrity for decades, and these functions have evolved to address the cybersecurity challenges of the 21st century. NIST's secure hashing algorithms act as a global framework for encryption and data management systems.

The initial instance of the Secure Hash Algorithm appeared in 1993 and is now known as SHA-0; it produced a 160-bit digest. Its successor, SHA-1, was released in 1995 and also produced a 160-bit digest. The next version, SHA-2, followed in 2002 and differs from its predecessors in that it can generate hashes of several different sizes. The whole family of secure hash algorithms goes by the name SHA. SHA-3, based on Keccak (also written KECCAK), is a family of cryptographic hash functions designed by Guido Bertoni, Joan Daemen, Michaël Peeters, and Gilles Van Assche. NIST announced a public competition to develop a new secure hash algorithm in 2007; Keccak won that contest and was standardized as SHA-3 in 2015 to serve as a fast and highly secure hashing algorithm.

The evolution of cybersecurity has driven the development of these "secure hash algorithms." Security is a crucial concern for businesses and individuals in today's digital world, so many types of encryption and hashing have been developed to protect data in various scenarios, and hash algorithms are one of them. Secure hash algorithms are part of modern security standards that keep sensitive data safe and help prevent different types of attacks. They use carefully designed mathematical functions that are one-way: given a digest, there is no practical way to recover the original data, and even a tiny change to the input produces a completely different hash. See the short example below.
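As a quick illustration, here is a small sketch using Python's standard hashlib module (the sample messages are arbitrary). It computes a SHA-256 digest, shows how a one-character change yields a completely different hash, and also prints a SHA-3 digest for comparison.

```python
import hashlib

message = b"The quick brown fox jumps over the lazy dog"

# Hashing is one-way: the 256-bit digest cannot be reversed into the message.
digest = hashlib.sha256(message).hexdigest()
print(digest)

# Even a one-character change ("dog" -> "cog") produces a totally different
# digest, which is what makes hashes useful for file-integrity checks.
tampered = hashlib.sha256(b"The quick brown fox jumps over the lazy cog").hexdigest()
print(digest == tampered)  # False

# SHA-3 (Keccak) is also available in the standard library.
print(hashlib.sha3_256(message).hexdigest())
```

Comparing a freshly computed digest against a published one is exactly how file-integrity checks and download verification work in practice.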


Segregated Witness (SegWit)

It is time to get this party started! SegWit is an upgrade adopted by the Bitcoin cryptocurrency community. It is a soft fork of the Bitcoin chain and has been widely accepted by miners and users. So what does it all mean? In short, if you are running a node (a piece of software that helps keep the Bitcoin network stable), you should upgrade your software to fully validate SegWit transactions; older nodes keep working, but they simply accept the new-style transaction data without checking it. SegWit was activated as a soft fork on August 24th, 2017. The most important thing to note about SegWit is that it fixes transaction malleability, which had plagued miners and users for years. However, you do not need to worry if you do not want to upgrade your software: you will still be able to use Bitcoin just fine! It is confusing, but it is not that confusing.

Segregated Witness (SegWit) is a proposal to improve Bitcoin that was implemented in August 2017. It allows more transactions per block, which means lower fees and faster transactions. SegWit2x was a follow-up proposal that would have added a hard fork months after the initial adoption of SegWit, creating two bitcoins: one version with the SegWit2x changes and one without, each acting as a separate currency while both claiming the name "Bitcoin." BIP 148 was another proposal, a user-activated soft fork, that pushed for the activation of SegWit.

To be clear, SegWit itself is a soft fork, not a hard fork. It is a technical improvement that allows more transactions to be processed at once, making the network faster and more efficient. A hard fork happens when developers propose changes that are incompatible with the old rules; if not everyone accepts those changes, there will be two versions of that particular cryptocurrency, one for each side. The Bitcoin Cash (BCH) chain split from Bitcoin in August 2017 is an example of a crypto hard fork: Bitcoin Cash is the result of a hard fork.
