Seriously?

I noticed a very interesting post today on LinkedIn and I seriously have to rant about how dumb it really is. But look for yourself first and see if you’re good enough as a developer to see the flaws in it.

This is a preview image for a free course on C++ programming. Now, I’m no expert at C++ as I prefer C# and Delphi but I can see several flaws in this little preview. Yes, even though it’s a very tiny piece of code, I see things that trouble me…

And the reason for ranting is not because the code won’t work. But because the code is badly written. And a course is useless when it encourages writing code badly… So, here it is:

But keep in mind that bad code can still work just fine, and working is what counts in the end. Good developers will go for great products, not great code. It's just that bad code makes software development harder than it needs to be… In general, good coding practice means using proper design patterns and making sure code is reusable.

So, not counting the lines with just curly brackets, I see five lines of code and two serious problems. And that’s bad. But the flaws are related because the person who wrote this C++ code seems to ignore the fact that C++ supports objects and classes. But it probably works so these aren’t really errors. They’re flaws in the design…

So, the first flaw should be obvious. I see various variable names that all start with a single letter and an underscore. The “m_” prefix seems to suggest that these variables are all related. That means they should be part of an object instead of just some loose variables. The “m_” probably stands for “man” or whatever, so you would need a class called “Man” with properties like ‘IsJumping’, ‘IsFalling’, ‘TimeThisJump’ and ‘JustJumped’. It suggests that Man is in a specific state, like falling or jumping. And as it is a state, you would expect a second class for the “Jumping” state. And probably others for “Falling”, “Walking” and other states.

The action triggered is a jump so the Man needs to be in a state that supports a jump. Jumping and Falling would not support this state so if man is in one of those states, he can’t jump. Otherwise, his state is changed to Jumping and that would start the timer. And if the timer is zero then ‘JustJumped’ would be true, right? So, no need to keep that property around.

So, all this code should have been classes and objects instead. One object for the thing that is doing something and various “State” classes that indicate what it’s doing at that moment. The fact that the author did not use classes and objects for this example shows a lack of OO knowledge. Yet C++ requires a lot of OO experience to use it properly and allow code reuse. After all, other items might jump and fall too, so they would use the same states.
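The state design sketched above might look like this in C++. This is a minimal sketch under my own assumptions; the class and method names (Man, State, Jumping and so on) are hypothetical, not taken from the course preview.

```cpp
#include <cassert>
#include <memory>

// Hypothetical sketch of the state design described above.
// All names are assumptions, not from the original course.
class State {
public:
    virtual ~State() = default;
    virtual bool CanJump() const { return true; }
};

class Standing : public State {};

class Jumping : public State {
public:
    bool CanJump() const override { return false; }  // can't jump mid-air
};

class Falling : public State {
public:
    bool CanJump() const override { return false; }  // can't jump while falling
};

class Man {
public:
    Man() : state_(std::make_unique<Standing>()) {}
    // Only change state when the current state allows a jump.
    bool TryJump() {
        if (!state_->CanJump()) return false;
        state_ = std::make_unique<Jumping>();
        timeThisJump_ = 0.0;  // resetting the timer replaces a JustJumped flag
        return true;
    }
    double TimeThisJump() const { return timeThisJump_; }
private:
    std::unique_ptr<State> state_;
    double timeThisJump_ = 0.0;
};
```

The point of the sketch: the loose `m_` variables become a `Man` object, and the jump/fall flags become state classes, so whether a jump is allowed is decided by the state itself.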

The next flaw is less obvious but relates to “Keyboard::IsPressed”. This is an event that gets triggered when the user presses a key. It’s incomplete and there’s a parenthesis in front of it, so I don’t see the complete context. But we have an event that is changing data inside itself, rather than calling some method on some class and letting that method alter the data. That’s bad, as it makes code reuse more difficult.

One thing to avoid when writing code is doing copy and paste of code from somewhere else. If a piece of code is needed in two or more places then it should be encapsulated in a single method that can be called from multiple locations. So this event should call a method “DoJump”, for example, so you can also have other calls to this method from other locations. For example, if you also want to support jumping on a mouse click.
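As a sketch, that extraction could look like this (Player, DoJump and the handler names are hypothetical, chosen just to illustrate the idea):

```cpp
#include <cassert>

// Sketch of moving event code into a single reusable method.
// All names here are illustrative assumptions.
struct Player {
    bool isJumping = false;
    int jumpCount = 0;

    // The shared logic lives in one method...
    void DoJump() {
        if (isJumping) return;  // already airborne, refuse the jump
        isJumping = true;
        ++jumpCount;
    }

    // ...so every event handler is a one-line call, and adding a
    // new trigger (gamepad, touch, ...) never means copy-and-paste.
    void OnSpacePressed() { DoJump(); }
    void OnMouseClicked() { DoJump(); }
};
```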

Too many developers write events and fill these events with dozens of lines of code. Which is okay if this code is only called at one moment. But it’s better to keep in mind that these event actions might also get triggered by other events so you need to move them into a separate method which you can then call.

That would allow you to reuse the same code over and over again without the need for copy and paste programming.

So, in conclusion, I see a preview for a C++ course that looks totally flawed to me. No errors, just flawed. This is not the proper way to write maintainable code!

A fifth pillar for programming?

For many years I’ve been comfortable knowing that programming is built on just four pillars. These are all you need to develop anything you want:

  • Statements, or basically all the actions that need to be performed.
  • Conditions, which determine in which direction the code will flow.
  • Loops, for situations where actions need to be repeated.
  • Structures, to organize both code and data into logical units: records and classes, databases with indices, and even protocols so two applications can communicate with one another.

But as I’m developing more and more asynchronous code where you just trigger an action and later wait for the response, I start wondering if there should be a fifth one. So, should there be a fifth one?

  • Parallel tasks, where two or more flows run concurrently.

Parallel execution?

It’s a tricky one, and if you learned to use flow charts and Nassi–Shneiderman diagrams to design applications, then you’ll discover that these diagrams don’t have any popular option to represent parallel tasks. Well, except in the way I just said: by triggering an external event and ignoring the result until you need it.

Well, Nassi-Shneiderman does have this type for concurrent execution:

[Image: Nassi–Shneiderman diagram with a parallel-processing block]

Image by Homer Landskirty, CC-BY-SA 4.0 International.

And flowcharts also have a fork-join model to represent concurrent execution. So these diagrams do provide some way to indicate parallel execution. But the representation is one that suggests several different tasks are executed at the same time. But what if you have an array of records and need to execute the same task on every record in a parallel way instead?

Parallel execution is quite old yet I never found a satisfying way to display concurrency in these kinds of diagrams. Which makes sense as they are meant to define a single flow while parallel execution results in multiple, simultaneous flows. And parallel execution can make things quite complex as you might just want to cancel all these flows if one of them results in an error.

So it has been possible to represent parallelism in these diagrams for a long time, even though it was rarely used in the past. But in modern times, where computers have numerous cores and multithreading has become a mainstream development technique, it has become clear that parallel or concurrent execution is becoming an important standard in programming.

So it’s time to recognize that there’s a fifth pillar to programming now. But mainly because it has become extremely common nowadays to split the flow of execution only to join things at a later moment.

But this will bring many challenges. Especially because forking can result in similar tasks being executed, and they don’t have to join all at once either. It basically adds a completely new dimension to programming, and while it’s an old technique, it isn’t until recently that mainstream developers have started using it. The diagrams are still limited in what they can express, while developers can be extremely creative.

Of course, there’s also a thing like thread pools as computers still have limits to how many threads can be executed simultaneously. You might trigger 50 threads at once yet add limitations so only 20 will run at the same time while the rest will wait for one of those threads to finish.
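As a sketch of that 50-threads-but-only-20-running idea, here is a minimal concurrency limiter in C++ (illustrative only; a real application would likely use a proper thread pool, and the counts 50 and 20 are just the example numbers from the text):

```cpp
#include <atomic>
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

// Minimal gate: at most `limit` holders at a time; others wait.
class ConcurrencyLimiter {
public:
    explicit ConcurrencyLimiter(int limit) : free_(limit) {}
    void Acquire() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return free_ > 0; });
        --free_;
    }
    void Release() {
        { std::lock_guard<std::mutex> lock(m_); ++free_; }
        cv_.notify_one();
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    int free_;
};

// Launches `total` trivial tasks but lets only `limit` run at once;
// returns the highest concurrency actually observed.
int RunLimited(int total, int limit) {
    ConcurrencyLimiter gate(limit);
    std::atomic<int> running{0}, peak{0};
    std::vector<std::thread> threads;
    for (int i = 0; i < total; ++i) {
        threads.emplace_back([&] {
            gate.Acquire();
            int now = ++running;                  // how many are inside now
            int old = peak.load();
            while (now > old && !peak.compare_exchange_weak(old, now)) {}
            --running;
            gate.Release();
        });
    }
    for (auto& t : threads) t.join();
    return peak.load();
}
```

The gate guarantees the peak concurrency never exceeds the limit, which is exactly the thread-pool behaviour described above.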

It becomes even more complex when two or more threads start sending signals to one another while they continue running. This is basically a start for event-driven programming, which doesn’t really fit inside the old diagram styles. You could use UML for several of these scenarios, but it won’t fit in a single diagram anymore. A sequence diagram could be used to display parallel execution with interprocess communication, but this focuses only on communication between processes, not the flow of a single process. Then again, UML doesn’t even have those kinds of flow diagrams.

So, while programming might have a fifth pillar, it also seems it breaks the whole system due to the added complexity. It also makes programming extremely difficult, especially for beginners. When you start to learn programming, the four pillars should teach you what you need to know. But once you start on the fifth pillar, everything will turn upside down again…

So maybe I should even forget about my four pillars theory? Or add the fifth one and ignore the added complexity?

So you want to start programming?


It is interesting to see how many people want to start programming because they’ve played with a computer and now want to make their own games, their own apps and their own websites. So they read about the more interesting programming languages and think they should learn PHP for web development, Java for Android or Swift for iOS. Or some other language like Python, Go, Rust, C# or any of the other programming languages mentioned in the TIOBE index. Why? Because these languages are very popular. But most won’t really notice that the standard C language is also extremely popular, yet everyone seems to skip it. The reason is often that the language is considered complex and difficult: it has no object-oriented paradigms, no overloading and no cool graphical libraries or even standard database functionality. Even worse, the book The C Programming Language needs less than 300 pages to explain everything you need to know, yet whole operating systems are written in it, so it must be complex. And yet it isn’t, as it fits within 300 pages…

But if you want to learn programming, you should start with standard C simply because it teaches you the basics of programming. Sure, objects and classes are fun, but that’s not code! That’s structure! Code is organised into classes and objects to add structure, so you can do more with less code. But to use those structures properly, you must first know what code is.

  • Remember this line, which I will refer to as Label 1.

And code is simple. You have statements, and statements are simple actions. You add two numbers, you show something on the screen or you wait for input. Those are simple actions. They’re tasks and they could be very simple or very complex. But from the code perspective, they’re all tasks.

Next, you will have conditions. Conditions will determine if certain statements (tasks) will be performed or not. If the user pressed a key, close the window. If a file is not found, show an error. And even in real life we see lots of conditions because if the cookie jar is empty, we will have to fill it with cookies again.

And last, you have loops. Basically, the repetition of code. You have done an action but you need to do it again. And again. And again. And rather than repeating the same statement over and over again, you just put it in a loop, which will basically run forever. Unless you add a condition to the loop allowing the control flow to jump out of the loop.

Like filling the empty cookie jar with cookies. It is empty so you add a cookie. It’s still not full so you add another cookie. And another one. And more. All the way until it is full as you can stop when it’s full.
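That cookie-jar loop translates almost literally into code. A tiny sketch (in C++ here for brevity, even though the text recommends starting with plain C; the capacity of 12 cookies is an arbitrary example value):

```cpp
#include <cassert>

// The cookie-jar loop from the text: keep adding cookies until the
// jar is full.
int FillJar(int capacity) {
    int cookies = 0;               // the jar starts empty
    while (cookies < capacity) {   // condition: stop when it's full
        ++cookies;                 // statement: add one cookie
    }
    return cookies;
}
```

A statement, a condition and a loop: all three pillars of code in five lines.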

So basically programming is all about statements, conditions and loops, and how the control of the actions flows through all of it. And if this is too difficult to understand, then read this part again by scrolling up to the line that I refer to as Label 1. If you understand this principle, continue reading.

So, you’ve just learned about statements, conditions and loops, so now you know programming! The rest has nothing to do with programming but is all about the structure of code. And that part is the functions and procedures, data structures, objects, classes, interfaces and whatever more. And as you should start practicing, the best thing to start with is a simple programming language that focuses mostly on the programming part and less on structure. And that language is C.

In C the only structural parts you’ll deal with are functions and data structures. And while C is perfect for making operating systems, it is also ideal for learning the basics. You can use it to make simple console applications to maintain an address book, shopping list or even a calendar. But the lack of a GUI, database support and classes actually makes C harder to learn for people who are already used to those! So it’s time to forget about those things and return to the basics.

One thing to keep in mind is that standard C has some very good built-in functionality for handling dates and times, which makes it perfect for a simple agenda. But the lack of database support means you’d have to do your own file management if you want to store your data for safekeeping and sharing. But a modern computer will have gigabytes of RAM, and even a busy calendar is unlikely to have more than a few thousand records of about 100 bytes each. So reading all data into memory should not be a big problem. But file input and output will be required to load and save your data, to be sure it’s all safe.

So, how to start? Start by determining what kind of data you’ll be handling. Do this with pen and paper, sitting on your couch with some milk and cookies or whatever else helps you think and stay away from your computer! You start by thinking about what you’re going to make so that computer isn’t needed yet! Design first, structure next and then you can code.

So, the agenda… Say you want an agenda to keep track of the whole family: where they are and what they’re doing. The agenda part suggests dates and times, the family means persons and the locations mean addresses. But what they’re doing is undefined, so we just keep it as free text. So, that’s the data that matters.

Next, what kind of control would you like? Well, it will be a console application, so the user will have to type in commands. A nice GUI would be nice, but a GUI is not part of the C standard. So command-line instructions it will be. Statements like “load” and “save” would be the first important thing. Then, because we’re dealing with data, we need statements to manage it, so we will have “Add”, “Modify” and “Delete” followed by “Calendar”, “Person” or “Location”. These statements are almost SQL-like! And we need to be able to show data, so that would be either “Show” or “Select” or “Print” or whatever you think sounds nice. Again with an indicator for the right table.

But now things become more challenging as you need indicators for the data records. Say, you want to add a new event so your statement would have a syntax like “Add Agenda <date> <time> <duration> <person_id> <location_id> <free text>” and your code would have to parse it, check if it’s all okay and then add it to the list. And by defining this simple statement you will already know more about your data structure as the agenda record will have a date, time, duration, link to a person, link to a location and some additional text while both the person and location records will have unique IDs. But that’s structure and we’re still designing here.
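As a sketch of what parsing such an “Add Agenda” command could look like (written in C++ here for brevity, though the text recommends plain C; the AgendaEntry record and all field names are my own illustration, not a prescribed design):

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Hypothetical record for one agenda line; names are assumptions.
struct AgendaEntry {
    std::string date, time;
    int duration = 0, personId = 0, locationId = 0;
    std::string text;
};

// Returns true when the line matches
// "Add Agenda <date> <time> <duration> <person_id> <location_id> <free text>".
bool ParseAddAgenda(const std::string& line, AgendaEntry& out) {
    std::istringstream in(line);
    std::string cmd, table;
    if (!(in >> cmd >> table) || cmd != "Add" || table != "Agenda") return false;
    if (!(in >> out.date >> out.time >> out.duration
             >> out.personId >> out.locationId)) return false;
    std::getline(in, out.text);                       // the rest is free text
    if (!out.text.empty() && out.text[0] == ' ') out.text.erase(0, 1);
    return true;
}
```

Parsing, checking and then storing: exactly the “check if it’s all okay and then add it to the list” step described above.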

So we continue defining the actions we want to perform on our data. A “Delete Agenda <agenda_id>” suggests that agenda needs a unique ID also. Fine! We have at least three tables and four actions per table (Create, Read, Update and Delete, or CRUD), so we have at least 12 functions to define. We don’t care what they are called, but we know how we will manage the data. And perhaps we have some alternate versions of these functions, as a location might not always be required, so we would have two “Add” functions. Or maybe we want to delete all agenda entries for a person or for a specific day. We’re almost making our own SQL syntax here!

But we won’t go full SQL on our data as we only focus on the things we consider important. We’re keeping things simple and don’t want users to work out all kinds of complex queries as this application needs to be simple.

The saving and loading of the data will be interesting, but my advice would be to just write it all to a simple text file so you can edit it afterwards in Notepad or another editor to fix errors. It means parsing text to data and converting data to text, preferably in some easy way. It’s a good exercise to come up with a good file format.
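One easy option is a tab-separated text format: one record per line, fields separated by tabs, so the file stays editable in any text editor. A minimal sketch of my own (again in C++ for brevity; the separator and layout are assumptions, not a prescribed format):

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Convert one record (a list of fields) to one tab-separated line.
std::string SaveLine(const std::vector<std::string>& fields) {
    std::string line;
    for (size_t i = 0; i < fields.size(); ++i) {
        if (i) line += '\t';
        line += fields[i];
    }
    return line;
}

// Convert one tab-separated line back into its fields.
std::vector<std::string> LoadLine(const std::string& line) {
    std::vector<std::string> fields;
    std::istringstream in(line);
    std::string field;
    while (std::getline(in, field, '\t')) fields.push_back(field);
    return fields;
}
```

The round trip (save, then load) must give back exactly the same record; that property is the whole point of a file format.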

So once we have the actions we want to perform on our data, we will have to consider what data we actually want. We have three simple tables and we already know the “Agenda” table will link to the other two. For persons, we would like to have a first name, last name, their function and maybe a phone number or birth date. For locations we want the name for the location, address and maybe even the phone extension number for the phone in that room. And while it is tempting to add more tables and stuff into this application, we have to focus on making something first and all we have by now are still notes on paper.

So, now that we know what we want to make, how we will control it and what the data will look like, we can start working on the actual code. And the code will be simple, as it’s a console application waiting for user commands. The user types a command, presses ‘Enter’ and the application starts parsing it and tries to execute any valid command while reporting any errors.

This is a simple programming exercise which also offers plenty of distractions, as you will likely want to make it prettier. Don’t fall for that trap, as you need something to show first! Once you have something to show, that would be version 1 and you can start on version 2, while version 1 is safe in your source control system.

You do use source control, don’t you?

Anyways, this is a project to start to learn programming as it mostly focuses on the programming itself, not the structure. When you start expanding the whole project, you can consider rewriting it in Java or C# or whatever language you prefer that supports OOP. Probably something with a nice graphical shell so the user doesn’t need to type commands but can open dialog windows and use drop-down options to select whatever they want. But all of that isn’t really programming. It’s adding structure. Call it structuring for all I care!

When you want to create applications, you should understand both programming and structuring. But keep in mind that the structure depends on the programming, not the other way around. Too many developers get lost in those structures and make things far more difficult than they need to be. Then again, too many developers start behind the computer, entering code instead of reading documentation, making notes and thinking ahead. Or they did think, but are trying to do way too much all at once.

The Binary Search problem

Many developers will have to learn all kinds of algorithms in their lives so they can write highly optimized code. Many of these algorithms have long histories and are well-tested. And one of them is the binary search method.

The binary search is a fast algorithm to find a record in a sorted list of records. It’s a very familiar algorithm to most people who have ever had to guess a value between 1 and 10, or 1 and 100. The principle is quite simple. You have x records and you pick the record in the middle of the list. For guessing between 1 and 100, you would pick 50 (100/2). If it is the correct value, you’ve won. If it is too high, you now know the value must be between 1 and 50, so you guess again with 25. Too low, and you pick the value between 50 and 100, which would be 75. You should be able to guess the value in up to 8 tries for values between 1 and 100.

Actually, the binary search is most easily explained as bitwise checking of values. A single byte goes from 00000000 to 11111111, so basically all you do is a bitwise compare from the highest bit to the lowest. You start with value 10000000 (128) and if the value you search for is at least that high, you know the first bit is 1, else it needs to be 0.

Your second guess would either be 11000000 (192) or 01000000 (64) and you would continue testing bits until you’ve had all bits tested. However, your last test could also indicate that you guessed wrong so the maximum number of guesses would be equal to the number of bits plus one.
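As an illustration of this bit-by-bit guessing, here is a small C++ sketch for a byte-sized range (the function name and the guess counter are my own choices):

```cpp
#include <cassert>

// Guess a secret byte (0..255) by deciding one bit per guess, from
// the highest bit down. `guesses` receives the number of comparisons.
int GuessByte(int secret, int* guesses) {
    int value = 0;
    *guesses = 0;
    for (int bit = 7; bit >= 0; --bit) {
        int candidate = value | (1 << bit);  // tentatively set this bit
        ++(*guesses);
        if (secret >= candidate) value = candidate;  // the bit is 1
    }
    return value;
}
```

Eight bits, eight guesses, every time: the number of guesses equals the number of bits, which is exactly the bitwise view of the binary search.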

And that’s basically what a binary search is. But it tends to be slightly more complicated. You’re not comparing numbers from 0 to some maximum value; those numbers are generally an index into an array, and you compare the value at that position in the array. You basically have a value X, which could be any data type or even a multi-field record, and you have an array of records sorted on some specific index. And these arrays can be reasonably large. Still, the binary search will let you search very quickly.

Now, the biggest problem with the binary search is how people calculate the index value for the comparison. I already said that you could basically check the bits from high to low, but most developers will use a formula like (floor+ceiling)/2, where floor is the lowest index value and ceiling the highest. This can cause an interesting problem in several programming languages, because there’s a risk of overflow when you do it like this!

So, overflow? Yes! If the index is an unsigned byte then it can only hold 11111111 (255) as a maximum value. So as soon as you have a floor value of 10000000 (128) and a ceiling of at least 10000001 (129), the sum requires 9 bits. But bytes can’t contain 9 bits, so an overflow occurs. And what happens next is difficult to predict.

For a signed byte it would be worse, since value 10000000 would be -128, so you effectively have 7 bits to use. If the 8th bit is set, your index value becomes negative! This means that with a signed byte, your array could never be longer than 64 records, or this math can generate an overflow. (64+65 would be 129, which translates to -127 for signed bytes.)

Fortunately, most developers use integers as index, not bytes. They generally have arrays larger than 256 records anyways. So that reduces the risk of overflows. Still, integers use one bit for the sign and the other bits for the number. A 16-bit integer thus has 15 bits for the value. So an overflow can happen if the number of records has the highest bit value set, meaning any value of 16384 and over. If your array has more than 16384 records then the calculation (floor+ceiling)/2 will sometimes generate an overflow.

So, people solved this by changing the formula to floor+((ceiling-floor)/2), because ceiling-floor cannot cause an overflow. It makes the math slightly more complex, but this is the formula most people are familiar with when doing a binary search!
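As a sketch, a textbook binary search with that overflow-safe midpoint looks like this in C++ (my own illustration, using plain `int` indices):

```cpp
#include <cassert>
#include <vector>

// Classic binary search using the overflow-safe midpoint
// floor + (ceiling - floor) / 2; returns the index of `key` or -1.
int BinarySearch(const std::vector<int>& sorted, int key) {
    int floor = 0, ceiling = (int)sorted.size() - 1;
    while (floor <= ceiling) {
        int mid = floor + (ceiling - floor) / 2;  // never overflows
        if (sorted[mid] == key) return mid;
        if (sorted[mid] < key) floor = mid + 1;   // search the upper half
        else ceiling = mid - 1;                   // search the lower half
    }
    return -1;
}
```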

Yet this formula makes no sense if you want high performance! If you want a binary search, you could actually just toggle each bit of the index until you find the value. To do so, you need to know how many bits you need for the highest value. And that will also tell you how many guesses you will need, at most, to find the value. But this kind of bitwise math tends to be too complex for most people.

So, there is another solution. You can promote the index value to a bigger type. You could use a 32-bit value if the index is a 16-bit value. Thus, you could use (int16_t)(((int32_t)floor + (int32_t)ceiling) / 2) and the overflow is gone again. And for a 32-bit index you could promote the math to a 64-bit integer type and again avoid any overflows.

It is still less optimal than just toggling bits but the math still looks easy and you can explain why you’re promoting the values.

But what if the index is a 64-bit value? There are almost no 128-bit integer types in mainstream programming languages. So how do you avoid overflows in those languages?

Well, here’s another thing. As I said, the index value is part of an array. And this array is sorted and should not have any duplicate values. So if you have 200 records, you would also need 200 unique values, with each value being at least 1 byte in size. If the index is a 16-bit signed integer (15 usable bits), then the values in the array must also be at least 15 bits and would generally be longer. Most likely, it would contain pointers to records elsewhere in memory, and pointers are generally 32 bits. (In the old MS-DOS era, addresses were 20 bits, so those systems could manage up to 1,048,576 bytes or 1 megabyte of memory.)

So, let’s do math! For an overflow to occur with a signed 16-bit index, you would need at least 16384 records. Each record would then be at least 2 bytes in size, thus you would have at least 32 kilobytes of data to search through. Most likely even more, since the array is probably made up of pointers pointing to string values or whatever. But 32 KB would be the minimum needed for an overflow to occur when using a 16-bit signed index.

So, a signed 32-bit index would at least have bit 30 set to 1 before an overflow can occur. It would also need to contain 32-bit values to make sure every value is unique so you would have 4 GB of data to search through. And yes, that is the minimum amount of data required before an overflow would occur. You would also need up to 31 comparisons to find the value you’re searching for, which is becoming a bit high already.

So, a signed 64-bit index would have records of at least 8 bytes in size! This requires 36,893,488,147,419,103,232 bytes of data! That’s 33,554,432 terabytes! 32,768 petabytes! 32 exabytes! That’s a huge amount of data, twice the amount of data stored by Google! And you need more data than this to get an overflow. And basically, this assumes that you’re just storing 64-bit integer values in the array; in general, the data stored will be more complex.

So, chances of overflows with a 32-bit index are rare and on 64-bit indices it would be very unlikely. The amount of data required would be huge. And once you’re dealing with this much data, you will have to consider alternate solutions instead.

The alternative solution would be hash tables. By using a hash function you can reduce any value to e.g. a 16-bit value. This would be the index into an array of pointers, so the whole array would be 256 KB. And each entry in this array could point to a second, sorted array of records, so you would have 65536 different sorted arrays, and in each of them you could use a binary search. This would be ideal for huge amounts of data, although things can be optimized further by calculating an even bigger hash value (e.g. 20 bits).

The use of a hash table is quite easy. You calculate the hash over the value you’re searching for and then check the list at that specific address in your hash table. If it is empty, your value isn’t in the system. Otherwise, you have to search the list at that specific location. If the hash formula distributes all possible values evenly, a hash table will be extremely effective.
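A minimal sketch of that hash-table-of-sorted-buckets structure in C++ (the 256 buckets and the djb2-style toy hash are my own choices for illustration; the text suggests 65536 buckets, which works the same way):

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Hash the key to pick a bucket, then binary-search the sorted bucket.
struct HashedIndex {
    std::vector<std::vector<std::string>> buckets{256};

    // Toy djb2-style hash, reduced to a bucket number.
    static size_t Hash(const std::string& key) {
        size_t h = 5381;
        for (char c : key) h = h * 33 + (unsigned char)c;
        return h % 256;
    }
    void Insert(const std::string& key) {
        auto& b = buckets[Hash(key)];
        b.insert(std::lower_bound(b.begin(), b.end(), key), key);  // keep sorted
    }
    bool Contains(const std::string& key) const {
        const auto& b = buckets[Hash(key)];
        if (b.empty()) return false;                 // empty bucket: not in the system
        return std::binary_search(b.begin(), b.end(), key);  // binary search inside
    }
};
```

Each bucket stays small, so the binary search inside it stays cheap even when the total data set is huge.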

Which brings me to a clear point: the binary search isn’t really suitable for large amounts of data! First of all, your data needs to be sorted! And you need to maintain this sort order every time you add or delete items in this list, or when you change the key value of a record! Hash tables are generally unsorted and have better performance, especially with large amounts of data.

So, people who use a 32-bit index for a binary search are just getting themselves in trouble if they fear overflows. When they start using floor+((ceiling-floor)/2) for their math, they’re clearly showing that they don’t understand the algorithm that well. The extra math slows down the algorithm slightly, while the risk of overflow should not exist. If it does exist with a 32-bit index, you’re already using the wrong algorithm to search your data. You’re maintaining an index of at least 4 GB in size, making it really difficult to insert new records. The time needed to sort that much data is also considerable and, again, far from optimal.

Thing is, developers often tend to use the wrong algorithms and often have the wrong fears. Whenever you use a specific algorithm, you have to consider all the options. Are you using the right algorithm for the problem? Are you at risk of overflows or underflows? How much data do you expect to handle? And what are the alternatives?

Finally, as I said, doing a binary search basically means toggling bits for the index. Instead of doing math to calculate the half value, you could instead just toggle bits from high to low. That way, you never even have a chance of overflows.

So, you want to be a software developer? Part 2.

Here’s part two about becoming a software developer. In the first part I told you about useful resources that will help you learn and once you’re working as a developer, those same resources will help you finish your projects.


By now, you should also have a few basic computer skills. Yet if you expect me to start talking about programming languages then no, not in this part. Because, as I said before, programming languages are not important. They are just tools and a good developer can use any tools he likes, even though it sometimes takes a short moment to get accustomed to the new tools.

To become a developer, you need to understand logic and you need to be able to visualize what your project will look like. And unfortunately, not everyone is born with these two abilities, although you can train to improve both abilities.

Having a mathematical background will help, since Math requires good logic. But there are other ways to train yourself.

To train your logic I would advise learning to play chess, checkers or Go. Just simple board games that require you to think ahead and outsmart your opponent. Chess especially is a good game because you have various pieces, each with different values, that you have to deploy over the ‘battlefield’ of 8×8 squares to capture your opponent’s King. Shredder Chess is a good chess engine that you can play for free online or on your mobile phone or tablet. And the various pieces will force you to think more strategically.

And although a game like Stratego and other games also require strategic insight, the advantage of chess is that the game starts with no hidden secrets for either player. Each player knows where the opponent’s pieces are and there’s no luck involved. Then again, if your opponent makes a mistake, you could consider that lucky, but it’s just a mistake by your opponent.

Learning Chess means that you have to think ahead in the game. Each move needs to be calculated. How will your opponent respond? And what will you do next. Good chess players can see a dozen or more moves ahead in the game, calculating which moves will win or lose. And to win, you must think further ahead than your opponent.

This is why training against a computer is a good way to improve your brain. Chess engines use databases for the opening and the endgame, which allows them to play strong, but in between they have to think just like humans, and they generally calculate just a few moves ahead. It should not be difficult to think further ahead than the lower levels. And to improve, you should keep playing against higher and higher levels until you win about half the games you play.

Next to learning strategy, logic and thinking ahead, you need to learn to visualize. For this, the best thing to do is to read. And since programming requires a good knowledge of the English language, I would advise you to read a lot of English science-fiction and fantasy literature. The stories of Conan the Barbarian, Tarzan or even the Dragonlance series are very good reading material. But if you don’t like this genre, read other stories where you can imagine how the main character is walking around in the environment set inside the book. Turn the words into a movie inside your head; visualize what is going on.

If that is a bit difficult, then read a book that has also been made into a movie. Movies like “The Hobbit” and “Lord of the Rings” were based on books written by Tolkien, and those make very good reading material. The movies follow the books reasonably closely too, so visualizing them becomes easier. Other books would be the series “A Song of Ice and Fire”, which was turned into “Game of Thrones”, or “The Southern Vampire Mysteries”, which resulted in the series “True Blood”. The screen versions should help you visualize the worlds described inside the books.

Another way to learn how to visualize is through role-playing games, preferably tabletop games with dice. Dungeons & Dragons is such a game, but GURPS is my favorite. GURPS is less about superpowers and more adjustable to different worlds and settings. In GURPS, you can play a time-traveller who goes back to WWII with the mission to take down the Nazis in 1940 to prevent some disaster in the future. Or you are a time-travelling wildlife photographer being sent to the Age of Dinosaurs, 65 million years ago, filming how the dinosaurs came to their end. Or maybe you’re just a Civil War-era cowboy fighting some alien invaders. Then again, in GURPS you can also just be a dungeon crawler, bashing monsters and gathering loot.

The point is, you need activities that force you to build a picture inside your head. This is important, since you will be working on various projects in the future and you will need to think ahead to know what the result will look like once it is done!

As a developer, you have to understand the whole process, starting with a good idea that you want to work out and ending with a finished product in the hands of those who can use it. If you’re a carpenter, you might decide to build some chairs and a table and visualize how they should look while the wood is still part of a tree. Once you are done, it is likely that the result won’t look like what you visualized at the start, but that’s mostly because you kept changing your mind during the process.

However, if you made this for a customer then it should closely match what you agreed upon with this customer, since he expects that you deliver what you have suggested to him. You might have made drawings and descriptions so your customer will expect your project to match this. Then again, even clients can change their minds and together you could agree on changing some of the details. But in general, you start each process visualizing what you are going to make and think ahead about all the steps that need to be taken to turn the idea into a final product.

Without visualization, the whole process of building your project will be awfully slow and full of challenges. (Developers don’t talk about problems! We call them challenges, since problems are things that need to be solved, and solving problems is a challenge!)

So, learn to use logic, learn to visualize things.

So, you want to be a software developer? Part 1.


So, you want to be a software developer too? Great! So, where do you start?

I started with a mentor: my father. Someone who could teach me some of the more important principles, even though it was still early in the world of computers. I had to learn from books and magazines but, more importantly, I had to learn by practicing! And that’s how you should start too. You need to practice your computer skills first.

But nowadays you have one major advantage over me. You have the Internet, and there are thousands of mentors willing to help you with anything you need. So, before you learn to write programs, you need to know a few important resources.

The most important resource for all programmers is this simple search engine called Google. You probably know it already and it is used by quite a few people, although some people dislike Google and how powerful this company actually is. They might use Yahoo, Bing or even DuckDuckGo. That’s just fine but in my own experience, Google tends to provide the most relevant answers to your questions.

Google does this by profiling. They keep a history of things you’ve searched for and, if you use Google Chrome as your web browser, they will most likely have a good idea of your interests. If you also have a GMail or Google Apps for Work account, they will have an even better profile of you and your preferences. And yes, in a way that invades your privacy! So you have to be smart in how you use Chrome and Google!

One simple way to get around this is by using multiple accounts; Chrome supports adding more than one profile. Make one account for any development-related stuff and make sure a Google account and mailbox are assigned to this profile. This is what you will use to let Google profile your development requests. Create a second account for anything that you want to keep more private and use whatever you like there. This second account can avoid all Google products and can be used if you like to surf for porn or whatever else using Yahoo. Or to write on Facebook and read tweets on Twitter. But use at least two accounts in Chrome if you want to develop software!

While Google is profiling your development account, you will notice how all search results will start to relate more and more towards software development. This way, you take advantage of what many people consider an invasion of privacy. Just make sure that anything you do with the development account is development-related.

So, now you have the most powerful online resource available to you and as you continue to use it, it will learn more and more about the information you need. So, let’s focus on a few other resources.

An old resource is Experts Exchange, which has a somewhat unfortunate name. People tend to read it as Expert Sexchange, which happens when they forget the dash in the URL or put it in the wrong place. You don’t want to confuse those two! Anyway, EE has been a very useful resource for me for about 15 years now, although the site’s knowledge base has become saturated. They also turned it into a paid website at one point, started serving advertisements and have been losing members, since new members had no free access to any answers, which annoyed many people. To get free access, you needed to earn a minimum number of points per month by answering other members’ questions. It’s a reasonable system, and they’re continuously searching for new ways to get extra revenue while expanding their huge knowledge base. Still, there is a slightly better resource available.

This alternative is Stack Overflow, where membership is free, although not ad-free. When you answer questions on this site, you can receive points which increase your rank. When your rank increases, you gain more access rights on the whole system and can even become a moderator. And like EE, this site already has a huge knowledge base. And it is just one of the many Stack Exchange sites! The makers of SO decided to use the same web software to set up various other sites for other topics, and this received a lot of support, allowing them to keep maintaining the software and sites. With over 150 different communities and topics, this is one of the best resources online. But for developers, Stack Overflow should be enough.

Another good resource is Quora. But Quora is more generic and can have all kinds of topics. It is a good place to ask non-technical questions about software development, like what kind of chair you need, what the best setup for your monitor and mouse is or even what to do when someone copied your source code without permission.

And of course, there are many more resources online that you can use, but to start, download Google Chrome and set up that developer account. Then use the resources I’ve just mentioned and start searching for more resources that you think are useful. Add all these resources to your favorites and make sure your Google settings are set to synchronize your settings with Google. Why? Because it allows you to share your favorites with your developer account on multiple computers! Or to synchronize between your computer, your laptop, your tablet with Chrome and your Android phone…

You may have noticed that I have talked about what programmers have to do and said nothing about programming yet! That is because you will need to prepare first. As a developer, the first thing you need to know is how to find the information you need.

One thing to keep in mind is that programming languages aren’t as important as they seem. A programming language is just a tool, like a hammer or screwdriver. You use it to create a product. For developers, that product is a piece of software. And for people who will use it, it just doesn’t matter how you created it as long as it just works as it is supposed to. So relax! Programming isn’t really about learning programming languages but about making products with your typing hands and brains! Just like a carpenter who makes furniture…

My history as a software developer

In the following posts I will talk about the things you need to learn to become a professional software developer. But before I start that series, let me first talk about my own experiences.

I was born in 1966 so I’ve seen this world for about half a century already. And I’ve seen how computers developed from devices as large as a house to devices the size of my fingernails. I’ve seen how difficult it was to write code in the beginning, how standards started to become more popular and made things easier and how people started to complain about standards and, as XKCD makes clear, “solved” the problem by making more standards…

(XKCD comic: “Standards”)

And this is still continuing. It is interesting to see how people just continue to invent new standards, new languages and new formats just because they dislike the old ones. It is also annoying for developers because you will miss some job opportunities if you can’t use one or more of these standards.

But back to me! I was lucky, since my father happened to be a software developer back then, until the day he retired in 1994. He worked for a bank called the “RijksPostSpaarbank” or RPS. (You could translate it as “State Mail Savings Bank”.) This was later known as just the “Postbank” (mail bank) and is now known as the ING. My dad was actually part of the “Postcheque- en Girodienst” (PCGD), which handled a lot of financial transactions.

The PCGD was one of the first fully automated giro services in the world, working with punched cards to automate a lot of stuff. Working with punched cards was quite fun back then, and my father once brought home a bag full of the punched-out paper to use as confetti for some party we were holding. That stuff was so nasty that when we moved out of our house in 1984, we could still find small pieces of this confetti in various places. Still, fun stuff!

And yes, my interest in programming and computers was inherited from my father. He once worked on one of the early Apple systems to help an uncle of mine automate his bookkeeping, and when he wasn’t using it, I was allowed to play games on it and experiment a bit.

When he later bought a programmable calculator, the TI-58, I was allowed to use it for school and quickly learned to write simple programs for it, within the 50 instructions its memory allowed. The first program I wrote myself, on this calculator and a piece of paper, was for the quadratic formula! And yes, I needed it written down on paper, since this calculator would lose the contents of its memory once you turned it off, so I had to re-enter the program very often!

Later, he bought a ZX-81 for me to learn more about computer programming. He used it for his bookkeeping too, but found out it wasn’t as powerful as he’d hoped. So I ended up being the one to use it. Mostly for games, but also for programming. And by this time I had been programming various things already, reading magazines to learn even more.

This was around 1982, and my school also decided to teach about computers, so I got a simple education in programming through my school. Funnily enough, the school started with punched cards where we had to fill in the holes with a black pencil. They would then be sent to some university, where a machine would punch holes for all the black spots, run the code and return a printed output of the results.

Compiling my code and getting the results back would often take two to three weeks. Which XKCD also documented:

(XKCD comic: “Compiling”)

Soon afterwards, we got the Sinclair QL, which was a bit more serious computer. Back in those days it was fast, and its two microdrives of 100 KB each made storing applications a lot easier. By then, I was also studying at the “Higher Laboratory School” to become a lab assistant, simply because this school offered an extra ICT education, teaching me how to use Pascal on a Minix system. It was the most interesting part of a school that otherwise sucked, so after a year I quit and went looking for a job as programmer, or whatever else in ICT. And since I’d learned Pascal, I had bought a Pascal compiler for the QL, which was quite rare back then.

I had one of the earlier PCs, in my case the Tulip System PC Advance/Extend, with a 20 MB hard disk, 640 KB of RAM and an EGA video card. I had an illegal copy of Turbo Pascal 3.0, which I used for writing my own programs, and a copy of NetHack, a fun game that I could play for hours.

Around 1987 I had the chance to follow some AMBI (Dutch) modules. Part of this was learning to program in COBOL and a special assembly language called EXAT. (Exam Assembly Language, specifically created for exams.) I also worked as an intern for six months at IBM Netherlands as a COBOL developer, although my job didn’t involve as many programming tasks as I had hoped for.

During that internship, I was also given a first look at SQL and even got some kind of diploma indicating that I was good enough as an SQL developer, in a time when SQL was mostly restricted to the “SELECT” statement.

I’ve upgraded to newer computers several times, working all kinds of jobs and trying to get a job as software developer, which meant I had to extend my skills. I got an illegal copy of Turbo Pascal 5 too, but when Turbo Pascal 6 came out, I had the financial means to just purchase a license. Which I also did when Borland Pascal 7 arrived on the market. (It also supported Windows 3.11 development, which I was using back then.) And around 1994 the first Borland Delphi version appeared, which I purchased and kept upgrading all the way to Delphi 2007.

The only Delphi version I bought after the 2007 edition was Delphi XE5. The problem was that Delphi was losing the battle against .NET and I needed to switch my skills. So in 2002 I started with the first Visual Studio compilers, and I later upgraded to the versions 2008, 2010, 2012 and now 2015. Why? Because since 2001 the .NET environment had been gaining control over the Windows market, and development jobs shifted from desktop applications to web development.

Nowadays, it shifts towards mobile development and embedded “Smart” devices and the “Internet of Things”.

Around 1993, I also got my first Linux distribution, which was interesting. It was difficult to get software for it, since the Internet was still under development back then and it was hard to find good sources and good documentation. Still, I managed to run Linux from a floppy disk and use the console for some simple things. But I would rarely use it until I started using Virtual Machines around 2002 using VMWare. Since then, I have created (and deleted) various virtual machines running some version of Linux. Also one or two versions of FreeBSD and once even Solaris.

And 1993 was also the year when I started to really focus on other languages. It was when the Internet started to rise, and I had a dial-up connection through CompuServe. CompuServe was well known back then, since almost every computer magazine came with a free CD for their Internet access. A lot of people had huge stacks of CompuServe CDs at home, not knowing what to do with them, although they made great frisbees and were also practical for scaring off birds in your garden. But I used one to subscribe and kept the account until 2005, even though I had moved to Chello in 1998. (Chello later became UPC and is now called Ziggo.)

Anyway, I learned some Forth and a few other languages, tried my best with Perl, got some early experience with HTML, and through Delphi I learned more about dBase IV and Paradox. (Both databases.) I also started to focus on C and C++, which was important since the Windows kernel was built in C and exposed methods that Delphi could call using the C calling conventions and logic. The Windows API was well documented for C developers, but I had to learn to convert this C code to Pascal so I could use the same libraries in Delphi.

With my copy of Borland Pascal 7 also came a copy of Turbo Assembler, so I focused on assembly as well. I wrote a mouse driver in assembly to use with Pascal on MS-DOS, and when I worked as a software developer in 1994 for a company called Duware B.V., I also used assembly to create a screen saver to use within the applications we created.

Those applications were created with Microsoft Basic Professional Development System 7.1, which technically was the last version of QuickBasic before Microsoft created Visual Basic. Since my employer wanted to move from MS-DOS to Windows, he was also looking for a good programming language for Windows. My suggestion of Delphi was ignored, because that meant my boss would need to learn Pascal, which he did not want to do. We also looked at Gupta SQL Windows, which seemed promising, but when he hired a new employee with PowerBuilder experience, he decided that we would move to PowerBuilder instead! That language was similar to the BASIC he knew and seemed somewhat promising.

Still, when it took two of his employees two days to make an animated button on a form, and he allowed them to waste that much time on an unimportant feature, I realised that this job wasn’t very promising. For my boss, BASIC was a Golden Hammer. And in my experience, you need to stay far away from people who use golden hammers, since they think they’re always right and always have the right tool. To them, the problem must fit the tool; otherwise, the project needs to be changed until it does.

Real developers realise that it’s the opposite! A tool must match the project, else you have to pick a different tool. What matters is that you have a large toolkit of programming languages and various techniques, APIs and frameworks that you’re familiar with so you can pick the right tool for each project.

And while I’ve been working as a Delphi developer for almost two decades, I have always focused on other languages too. I’ve done some projects partly in Assembly to speed some processes up. I’ve worked on C projects that needed to compile on various mainframes and Unix systems so I could only use the standard libraries. I’ve worked with techniques like ActiveX, COM, DCOM and COM+. I’ve created web pages in PHP that were served from a Delphi server application. I’ve written code in C++ whenever that was required. And since 2001 I also focused on .NET and specifically C# and ASP.NET for web development and web services. I’ve used Python, Perl, JavaScript and I’ve specialized in XML with style sheets and creating XML schemas. I even worked with ASN.1 for a project where I had to communicate with an external device that used a BER encoding standard.

And these days, my main focus is on Visual Studio 2015 with C# and C++, Clang, JavaScript and jQuery. I’m also learning more about electronics, writing C programs and libraries for the ATtiny85 and other Atmel microcontrollers, to make my own hardware and to communicate with these self-made devices from e.g. my web server.

As a developer, it is a good thing to experiment with various electronic devices and microcontrollers to hone your skills. It provides a better insight into hardware and how to communicate between devices. You will often have to consider techniques like WiFi, Bluetooth or infrared communication and come up with proper protocols to send information between devices.

All in all, I have a varied experience with lots of hardware and software, I can manage my own web servers and I am experienced with various operating systems like Windows, Linux, BSD and iOS. I am now focusing on embedded devices and Android/iOS development, but I still keep all my skills up to date, including my Delphi knowledge next to C#. I need various tools in my toolbox, which is important for each and every software developer in this world.

And no, I don’t think that language X is better than language Y. Good developers care as much about programming languages as expert carpenters do about their hammers and screwdrivers. Because it is not the tool that matters, but the final product that you’re building!

The need of security, part 3 of 3.

Azra Yilmaz Poses III

Do we really need to hash data? And how do we use those hashed results? That is the current topic.

Hashing is a popular method to generate a key for a piece of data. This key can be used to check whether the data is unmodified and thus still valid. It is often used as an index for data, but also as a way to store passwords. The hashed value isn’t unique in general, though. It is just a large number within a specific range of values. If this range happens to be 0 to 9, any data would map to one of 10 identifiers, so if we store 11 pieces of data as hashes, at least two pieces of data will generate the same hash value. Such duplicates are called collisions.

There are various hashing algorithms designed with a large numerical range to avoid collisions, since the chance of collisions is much bigger in smaller ranges. Many algorithms have also been created to generate a more even distribution of hash values, which further reduces the chance of collisions.

So, let’s have a simple example. I will hash a positive number into a value between 0 and 9 by adding all its digits to get a smaller number, repeating this for as long as the resulting number is larger than 9. So the value 654321 becomes 6+5+4+3+2+1, or 21. That becomes 2+1, thus the hash value is 3. Unfortunately, this algorithm won’t distribute all possible hash values equally. The value 0 will only occur when the original value is 0. Fortunately, the other values are distributed equally, as the following piece of code demonstrates:


using System;
 
namespace SimpleHash
{
    class Program
    {
        // Repeatedly sums the decimal digits of a value until a single digit (0-9) remains.
        static int Hash(int value)
        {
            int result = 0;
            while (value > 0)
            {
                result += value % 10; // add the least significant digit
                value /= 10;          // drop that digit
            }
            if (result >= 10) result = Hash(result); // more than one digit left? hash again
            return result;
        }

        static void Main(string[] args)
        {
            // Count how often each hash value 0-9 occurs for the first million numbers.
            int[] index = new int[10];
            for (int i = 0; i < 1000000; i++) { index[Hash(i)]++; }
            for (int i = 0; i < 10; i++) { Console.WriteLine("{0}: {1}", i, index[i]); }
            Console.ReadKey();
        }
    }
}

Well, it proves this only for the values up to a million, but it shows that of those million numbers, 999,999 map to a value between 1 and 9 (111,111 each) and only one, zero itself, maps to 0.

As you can imagine, such a hash divides a large group of numbers into 10 smaller groups. This is practical when you need to search through data, especially with a bigger hash range. Imagine having 20 million unsorted records and a hash value between 1 and 100,000. Normally, you would have to look through all 20 million records, but if they’re indexed by hash value, you calculate the hash for the data you’re looking for and only have to compare about 200 records. That increases performance, but at the cost of building and maintaining an index. And the data needs to be an exact match; otherwise the hash value will be different and you won’t find it.
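The bucketing idea can be sketched in a few lines of Python. The digit-sum hash and the record set below are just toy stand-ins for a real index:

```python
# Toy illustration of hash-based indexing: instead of scanning every record,
# we only scan the one bucket that the search key hashes into.

def digit_sum_hash(value: int) -> int:
    """Repeatedly sum decimal digits until a single digit (0-9) remains."""
    while value > 9:
        value = sum(int(d) for d in str(value))
    return value

# Build the index once: group records into buckets keyed by their hash.
records = list(range(100000))
buckets = {}
for record in records:
    buckets.setdefault(digit_sum_hash(record), []).append(record)

# Lookup: hash the search key, then compare only within that one bucket.
needle = 54321
candidates = buckets[digit_sum_hash(needle)]
print(len(records), len(candidates))  # far fewer candidates than total records
```

With only ten buckets the gain is a factor of roughly ten; with a hash range of 100,000 the same idea yields the 20-million-to-200 reduction described above.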

But we’re focusing on security now, and the fact that you need an exact match makes hashing a perfect way to check a password. Because you want to limit the amount of sensitive data you keep, you should not store any passwords at all. If a user forgets a password, it can be reset, but you should never be able to just tell them their current password. That’s a secret only the user should know.

Thus, by comparing hashes, you make sure the user provided the right password. But there is a risk of collisions, so passwords like “Wodan5tr1ke$Again” and “123456” might actually result in the same hash value! The user thinks his password is secure, yet a password almost everyone seems to have used at some point would also unlock all his treasures! That would be bad, so you need two things to prevent this.

First of all, the hash algorithm needs to provide a huge range of possible values. The more, the better. If the result happens to be a 256-bit value, that would be great, and bigger numbers are even more welcome. The related math will be more complex, but hashing algorithms don’t need to be fast anyway. Fast algorithms actually speed up brute-force attacks, so for password hashing, slower algorithms are better. The user can wait a second or two. But for a hacker, two seconds per attempt means he’ll spend weeks, months or longer just to try a million possible passwords through brute force.
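As a sketch of such a deliberately slow hash: Python’s standard library ships PBKDF2, which repeats the underlying hash many times. The salt and iteration count below are example values for illustration, not recommendations:

```python
import hashlib
import hmac

# PBKDF2 applies the underlying hash over and over; raising the iteration
# count makes every guess in a brute-force attack proportionally slower.
password = b"Wodan5tr1ke$Again"
salt = b"example-salt"  # in practice: random and unique per user

slow = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)

# Verifying a login attempt means recomputing with the same salt and
# iteration count, then comparing the results in constant time.
attempt = hashlib.pbkdf2_hmac("sha256", b"123456", salt, 200_000)
print(hmac.compare_digest(slow, attempt))  # False: wrong password
```

The attacker pays the full 200,000 rounds for every guess, while a legitimate user pays it only once per login.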

Second, it is a good idea to filter out all simple, easy-to-guess passwords and to use a minimum length requirement together with a complexity requirement, like requiring upper- and lower-case letters together with a digit and a special character. And these checks should not only apply when users pick a password; they should also be performed every time a password is entered, before you check its hash value. This way, even if a simple password collides with one of the more complex ones, it will still be denied, since it doesn’t match the requirements.

Use regular expressions, if possible, to check whether a password matches all your requirements, and please allow users to enter special characters and long passwords. I’ve seen too many sites that block the use of special characters or only use the first 6 characters for whatever reason, thus making their security a lot weaker. (And those sites also tend to store passwords in plain text, to add insult to injury!)
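A minimal sketch of such checks in Python; the exact rules below are an example policy, not the only sensible one:

```python
import re

# Example policy: at least 10 characters, with an upper-case letter, a
# lower-case letter, a digit and a special character. Note that long
# passwords and special characters are allowed, not blocked.
RULES = [
    re.compile(r".{10,}"),       # minimum length
    re.compile(r"[A-Z]"),        # an upper-case letter
    re.compile(r"[a-z]"),        # a lower-case letter
    re.compile(r"[0-9]"),        # a digit
    re.compile(r"[^A-Za-z0-9]"), # a special character
]

def password_acceptable(password: str) -> bool:
    # Every rule must find a match somewhere in the password.
    return all(rule.search(password) for rule in RULES)

print(password_acceptable("123456"))             # False
print(password_acceptable("Wodan5tr1ke$Again"))  # True
```

Run these checks on every login attempt, not just at registration, so a colliding weak password is rejected before its hash is ever compared.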

Security is a serious business and you should never store more sensitive data than needed. Passwords should never be stored anyways. Only hashes.

If you want an even stronger password check, concatenate the user name to the password. Convert the user name to upper case (or lower case) first, though, so the user name stays case-insensitive. Don’t do the same with the password! The result is that the user name and password together produce the hash value, so even if multiple people use the same password, they will still have different hashes.

Why is this important? Because some passwords happen to be very common, and if a hacker knows one such password, he could look in the database for identical hashes and would immediately know the passwords for those accounts too! By adding the user name, the hash will be different for every user, even if they all use the same password. This trick is often forgotten, yet it is simple enough to make your system a lot more secure.
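A minimal sketch of this trick, using SHA-256 purely as an example algorithm:

```python
import hashlib

def password_hash(username: str, password: str) -> str:
    # The user name acts as a salt. It is normalized to upper case so the
    # user name stays case-insensitive; the password stays case-sensitive.
    combined = username.upper() + password
    return hashlib.sha256(combined.encode("utf-8")).hexdigest()

# Two users with the same password still get different hashes,
# so identical hashes no longer reveal shared passwords.
a = password_hash("alice", "S3cret!Pass")
b = password_hash("bob", "S3cret!Pass")
print(a != b)  # True

# The same user always gets the same hash, regardless of name casing.
print(password_hash("Alice", "S3cret!Pass") == a)  # True
```

In production you would combine this with a slow hash such as PBKDF2 rather than a single SHA-256 round.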

You can also include the timestamp of when the user registered the account, their gender or other fixed data that won’t change after the account is created. And if you do allow users to change their account name, require them to provide their password at the same time, so you can calculate the new hash value.

The need of security, part 2 of 3.

Azra Yilmaz Poses II

What is encryption and what do we need to encrypt? That is an important question that I hope to answer now.

Encryption is a way to protect sensitive data by making it harder to read. It basically has to prevent people from looking at the data and immediately recognizing it. Encryption is thus a very practical way to hide data from plain view, but it doesn’t stop machines from using a few extra steps to read your data again.

Encryption can be very simple. There’s the Caesar cipher, which basically shifts letters in the alphabet. In a time when most people were illiterate, this was actually a good solution. But nowadays, many people can decipher such texts without much trouble, some even just inside their heads, without making notes. Still, some people like to use ROT13 as a very simple encryption solution, even though it’s almost the same as having no encryption at all. But combined with other encryption or hashing methods, it can make encrypted messages harder to read, because the input for the more complex encryption method already has a simple layer of encryption.
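ROT13 is simple enough to show in full. A small Python sketch (the standard library even ships it as a codec):

```python
import codecs

def rot13(text: str) -> str:
    # Shift each letter 13 places in the alphabet. Since 13 + 13 = 26,
    # applying ROT13 twice restores the original: it is its own inverse.
    result = []
    for ch in text:
        if "a" <= ch <= "z":
            result.append(chr((ord(ch) - ord("a") + 13) % 26 + ord("a")))
        elif "A" <= ch <= "Z":
            result.append(chr((ord(ch) - ord("A") + 13) % 26 + ord("A")))
        else:
            result.append(ch)  # digits and punctuation pass through unchanged
    return "".join(result)

print(rot13("Hello, World!"))         # Uryyb, Jbeyq!
print(rot13(rot13("Hello, World!")))  # Hello, World!
print(codecs.encode("Hello, World!", "rot13"))  # same result via the codec
```

The fact that anyone can run these few lines is exactly why ROT13 on its own offers no real protection.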

Encryption generally comes with a key. And while ROT13 and Caesar’s cipher don’t seem to have one, you can still consider the translation table that tells how each character gets mapped to be the key. Then again, even the mathematical formula can be considered a key.

Having a single key allows secret communication between two or more persons and thus keeps data secure. Every person receives the key and can use it to decrypt any incoming messages. These are called symmetric-key algorithms, and they basically allow communication between multiple parties where each member can read all messages.

The biggest problem of using a single key is that it might fall into the wrong hands, allowing more people access to the data than originally intended. That makes a single key more dangerous in the long run, but it is still practical for shorter sessions between multiple groups, as long as each member has secure access to the proper key. And the key needs to be replaced often.

A single key could be used by chat applications where several people will join the chat. They would all retrieve a key from a central environment and thus be able to read all messages. But you should not store the information for a long time.

A single key can also be used to store sensitive data into a database, since you would only need a single key to read the data.
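To illustrate the defining symmetric property, that the same key both encrypts and decrypts, here is a toy XOR stream in Python. This is NOT a secure cipher; it only demonstrates the idea:

```python
import itertools
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key. Because XOR undoes itself,
    # the exact same call both encrypts and decrypts: the hallmark of a
    # symmetric-key algorithm.
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

key = secrets.token_bytes(16)  # every participant receives this same key
message = b"meet me at noon"

ciphertext = xor_cipher(message, key)
print(xor_cipher(ciphertext, key) == message)  # True: key holder reads it back
```

Real symmetric algorithms like AES work on the same principle, one shared key for both directions, but with vastly stronger mathematics behind them.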

A more popular solution is an asymmetric-key, or public-key, algorithm. Here you have two keys: you keep the private (master) key and give others the public key. The special property of this system is that you can encrypt data with either of the two keys, but you can’t use the same key to reverse that action. This makes it very useful for sending data in a single direction: the private key encrypts data and you need the public key to decrypt it, or the public key encrypts data and you need the private key to decrypt it.

Using two keys thus limits communication to a central hub and a group of people. Everything needs to be sent to the central hub, and from there it can be broadcast to the others. For a chat application this is less useful, since it means the central hub has more work to do: it needs to continuously decrypt and encrypt data, even if it doesn’t need to know the content of that data.

For things like email and secure web pages, two keys are practical, though. The mail or web server gives the public key to anyone who wants to connect, so they can encrypt sensitive data before sending it to the server, and only the server can read it, using the private key. The server can then use the private key to encrypt new data and send it to the visitor, who uses the public key to decrypt the message again. Thus, you have secure communication between two parties.

Both methods have some very secure algorithms but also some drawbacks. Using a single key is risky if that key falls into the wrong hands. One way to solve this is by sending the single key to the other side using a two-key algorithm! That way, it is transferred securely, as long as the key used by the receiver stays secret. In general, the receiving key should be a private key, so only the recipient can read the single key you've sent.

A single key is also useful when encrypting files and data inside databases, since it would only require one key for both actions. Again, you would need to store that key in a secure way, which would again use a two-key algorithm. You would use a private key to encrypt the single key and include a public key in your application to decrypt it again. You would use that public key only inside your applications, but it would allow you to use a single public key in multiple applications to access the same data.

As I said, you need to limit access to data as much as possible. This generally means that you will be using various different keys for various purposes. Right now, many different encryption algorithms are already in use but most developers don’t even know if the algorithm they use is symmetrical or asymmetrical. Or maybe even a combination of both.

Algorithms like AES, Blowfish and RC4 actually use a single key, while systems like SSH, PGP and TLS are built on two-key algorithms. Single-key algorithms are often used for long-term storage of data, but the key would have additional security to avoid easy access to it. Two-key algorithms are often used for message systems, broadcasts and other forms of communication, because the traffic is meant to go in a single direction. You don't want an application to store both a private key and the matching public key, because it makes encryption a bit more complex and would provide a hacker a way to get the complete pair.

And as I said, a single key allows easier communications between multiple participants without the need for a central hub to translate all messages. All the hub needs to do is create a symmetrical key and provide it to all participants so they can communicate with each other without even bothering the central hub. And once the key is deleted, no one would even be able to read this data anymore, thus destroying almost all traces of the data.

So, what solution would be best for your project? Well, for communications you have to decide if you use a central hub or not. The central hub could archive it all if it stays involved in all communications, but you might not always want this. If you can provide a single key to all participants then the hub won’t be needed afterwards.

For communications in one single direction, a two-key algorithm would be better, though. Both sides would send their public key to the other side and use this public key to send messages, which can only be decrypted by the private key which only one party has. It does mean that you actually have four keys, though. Two private keys and two public keys. But it happens to be very secure.

For data storage, using a single key is generally more practical, since applications will need this key to read the data. But this single key should be considered sensitive, so you need to encrypt it with a private key and use a public key as part of your application to decrypt the original key again.

In general, you should use encryption whenever you need to store sensitive data in a way that you can also retrieve it again. This is true for most data, but not always.

In the next part, I will explain hashing and why we use it.

The need of security, part 1 of 3.


Of all the things developers have to handle, security tends to be a very important one. However, no one really likes security, and we would rather live in a society where you can leave your home with the front door open. We generally don't want to deal with security because it's a nuisance!

The reality? We lock our doors, afraid that someone gets inside and steals things. Or worse, waits for us to return to kill us. We need security to protect ourselves, since we're living in a world where a few people have very bad intentions. And we hate it because security costs money, since someone has to pay for the lock. And it takes time to use, because locking and unlocking a door is still an extra action you need to take.

And when you’re developing software, you generally have the same problem! Security costs money and slows things down a bit. And it is also hard to explain to a client why they have to pay for security and why it has to cost so much. Clients want the cheapest locks, yet expect their stuff to be as safe as Fort Knox or even better.

The worst part of all security measures is that they're never able to keep everyone out. A lock on your door won't help if you still leave the window open. And if the window is locked, it is still glass that can be broken. The door can be kicked in too. There are always a lot of ways for the Bad People to get inside, so what use is security anyway?

Well, the answer is simple: to slow down any would-be attacker so he can be detected and dealt with, and to make the break-in more expensive than the value of the loot stored inside. The latter means that the more valuable the loot, the stronger your security needs to be. Fort Knox contains very valuable materials, so it has a very strong security system with cameras, lots of armed guards and extremely thick walls.

So, how does this all translate to software? Well, simple. The data is basically the loot that people are trying to get at. Legally, data isn't property and doesn't even have much legal protection, so it can't be stolen. However, data can be copyrighted or it can contain personal information about people. Or, in some cases, the data happens to be secrets that should not be exposed to the outside world. Examples of these three would be digital artwork, your name and bank account number, or the formula for a deadly poison that can be made from basic household items.

Of all this data, copyrighted material is the most common item to protect, and this protection is made harder because the material is meant to be distributed. The movie and music industry has a very hard time protecting all the copyrighted material it owns, and the same applies to photographers and other graphical artists. But also to software developers. The main problem is that you want to distribute a product in return for payment, and people are getting it without paying you. You could consider this lost profit, although if people had no option but to pay for your product, they might not have wanted it in the first place. So the profit loss is hard to prove.

To protect this kind of material you will generally need some application that can handle the data you're publishing. For software, this is easy because you would include additional code in your project that checks whether the software has been legally installed or not. Often this includes a serial number and additional license information, and nowadays it tends to include calling a special web server to check if the license is still valid.

For music and films, you can use a technique called DRM which works together with proper media players to make additional checks to see if the media copy happens to be from a legal source or not. But it would limit the use of your media to media players that support your DRM methods. And to get media players to support your DRM methods, you need to publish those methods and hope they’re secure enough. But DRM has already been bypassed by hackers many times so it has proven to be not as effective as people hoped.

And then there’s a simpler option: add a copyright notice to the media. This is the main solution for artwork anyway, since there's no DRM for plain graphic images. You could make the image part of an executable, but then you'd have to build your own picture viewer and users won't be able to use your image. Not many people want to just see images, unless it is pornography. So you will have to support the basic image file formats, which are generally .JPG or .PNG for any image on the Internet, or .GIF for animations. And you protect them by adding a warning in the form of a copyright notice. Thus, if you discover someone using your art without a proper license, you can start legal action against the violator and claim damages. This would start by sending a bill and, if they don't pay, going to court and having a judge force them to pay.

But media like films, music and images tend to be hard to protect and often require going to court to protect your intellectual property. And you won’t always win such cases either.

Next on the list is sensitive, personal information. Things like usernames and passwords, for example. One important rule to remember is that usernames should always be encrypted and passwords should always be hashed. These are two different techniques to protect data and will be explained in the next parts.
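As a preview of the hashing part of that rule, here is a minimal password-hashing sketch using only Python's standard library. PBKDF2 is a real key-derivation function in `hashlib`; the iteration count and helper names here are just example choices.

```python
import hashlib
import hmac
import secrets

def hash_password(password: str) -> tuple:
    # PBKDF2-HMAC-SHA256: a one-way, salted hash. The salt ensures two users
    # with the same password still get different digests.
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

Note there is no decrypt function at all: hashing is one-way, which is exactly why it suits passwords and encryption does not.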

But there is more sensitive data that might need to be stored and which would be valuable. An email address could be misused to spam people, so that needs to be encrypted. Name, address and phone numbers can be used to look up people and harass them by ordering stuff all over the Internet and having it sent to their address. Or to file fake address changes, redirecting their mail somewhere else so they won't receive any mail or other services. Or even to visit the address, wait until the people have left the house and then break in. And what has happened in the past with the addresses of young children is that a child molester learned of their address and went to visit them to rape and/or kidnap them. So this information is also sensitive and needs to be encrypted.

Other important information, such as bank account details, medical data and employment history, would be sensitive enough to encrypt. Order information from visitors might also be sensitive if the items were expensive, since those items would become interesting things to steal. You should basically evaluate every piece of information to determine if it needs to be encrypted or not. In case of doubt, encrypt it just to make it more secure.

Do keep in mind that you can often generate all kinds of reports about this personal data. A simple address list of all your customers, for example. Or the complete medical file of a patient. These documents are sensitive too and need to be protected, but they're also just basic media, like films and artwork, so copies of those reports are hard to protect and often not protected by copyright. So be very careful with report generators and have reports contain warnings about how sensitive the data in them actually is. Also useful is to include a cover page as the first page of a report, in case people print it. The cover page covers the content as long as the report is kept closed. It's not much protection, but all small bits are useful, and a cover page prevents passers-by from easily reading the top page of the report.

Personal information is generally protected by privacy laws and thus misuse of personal information is often considered a criminal offense. This is unlike copyright violations, which are just civil offenses in general. But if you happen to be a source of leaking personal information, you and your company could be considered guilty of the same offense and will probably be forced to pay for damages and sometimes a large fine in case of clear negligence in protecting this data.

The last part of sensitive data tends to be ideas, trade secrets and more. In general, these are just media files like reports and thus hard to protect, although there are systems that could store specific data as personal data so you can limit access to it. Ideas and other similar data are often not copyrightable. You can’t get copyright on an idea. You can only get copyright over the document that explains your idea but anyone who hears about your idea can just use it. So if you find a solution for cleaner energy, anyone else could basically build your idea into something working and make profits from it without providing you any compensation. They don’t even have to say it was your idea!

Still, to protect ideas you can use a patent, which you will have to register in many countries just to protect your idea everywhere. Patents become open to the public so everyone will know about it and be able to use it, but they will need to compensate you for using your idea. And you can basically set any price you like. This system tends to be used by patent trolls in general, since they describe very generic ideas and then go after anyone who seems to use something very similar to their idea. They often claim an amount of damages that would be lower than the legal amount it would cost the accused to fight back, so they tend to get paid for this trolling. This is why many are calling for patent reforms to stop these patent trolls from abusing the system.

So, ideas are very sensitive. You generally don’t want to share them with the generic public since it would allow others to implement your ideas. Patents are a bit expensive and not always easy to protect. And you can’t patent everything anyways. Some patents will be refused because they’ve already been patented before. And yet you still need to share them with others so you can build the idea into a project. And for this, you would use a non-disclosure agreement or NDA.

An NDA is basically a contract to make sure you can share your idea with others and they won’t be allowed to share it with more people without your permission. And if your idea does get leaked, those others would have to compensate your financial losses due to leakage as mentioned in the NDA contract. It’s not very secure but it generally does prevent people from leaking your ideas.

Well, except for possible whistleblowers who might leak information about any illegal or immoral parts of your idea. For example, if your idea happens to be to blow up the subway in Amsterdam and have an NDA with a few other terrorists to help you then it becomes difficult when one of those others just walks to the police to report you and those who help you. The NDA just happens to be a contract and can be invalidated for many reasons, including the more obvious criminal actions that would relate to it.

But there are also so-called blacklists of things you can’t force in an NDA, depending on the country where you live. It is just a contract and thus handled by the Civil courts. And if the NDA violates the rights of those who sign it then it could be invalid. One such thing would be the right of free speech, where you would ban people from even discussing if your idea happens to be good or not.

Other sensitive information would be things like instructions on how to make explosives or business information about the future plans of Intel, which could influence the stock market. Some of this information could get you into deep trouble, including the Civil Court or Criminal Court as part of your troubles, resulting in fines and possibly imprisonment if they are leaked.

In general, sensitive information isn’t meant to be shared with lots of people so you should seriously limit access to such information. It should not be printed and you should not email this information either. The most secure location for this information would be on a computer with no internet connection but having a strong firewall that blocks most access methods would be good enough for many purposes.

So we have media, which is hard to protect because it is meant to be published. We have sensitive data which should not fall in the wrong hands for various reasons and we have personal data, which is basically a special case of sensitive data that relates to people and thus has additional laws as protection.

And the way to secure it is by posting warnings and limiting access to the data, which is difficult if it was meant to be published. But for those data that we want to keep private, we have two ways of protecting it next to limiting the physical access to this data.

To keep things private, you will need to have user accounts with passwords or other security keys to lock the data and limit access to it. And these user accounts are already sensitive data so you should start with protecting it here, already.

Of all the things software developers do, security happens to be the most complex and expensive part, since it doesn’t provide any returns on investments made. All it does is try to provide assurance that data will only be available for those who are meant to use it.

The two ways to protect data are encryption and hashing, two similar techniques that differ in their purpose. I will discuss both in my next posts.