An example of bad development…

I recently received an email from a company that runs questionnaires. Well, I subscribed to this and did some of their questionnaires before, so I wanted to do this new one too. Unfortunately, the page loaded quite slowly, only to return a very nasty error message. A message that told me that this organisation is using amateurs as developers and administrators.

Let me be clear about one thing: errors will happen. Every developer should expect weird things to happen, but this case is not just an error; it’s evidence of amateur work. So, let’s start by analyzing the message…

Server Error in ‘/’ Application.

Timeout expired.  The timeout period elapsed prior to obtaining a connection from the pool.  This may have occurred because all pooled connections were in use and max pool size was reached.

So, what’s wrong with this? Users should never see these messages! When you develop in ASP.NET you can simply tell the system to show these detailed error messages only when the user is connected locally. A remote user should see a much simpler message.
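
For reference, this takes a single setting in web.config; a minimal sketch (the redirect page name is just an example):

<configuration>
  <system.web>
    <!-- RemoteOnly: detailed errors for local requests only; remote users get the friendly page. -->
    <customErrors mode="RemoteOnly" defaultRedirect="~/Error.aspx" />
  </system.web>
</configuration>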

This is something the administrator of the website should have known, and checked. He did not. By failing at this simple configuration setting, the organisation is leaking sensitive information about its website. Information that’s enough to convince me they’re amateurs.

This is also quite a common error message. Basically, it’s telling me that the system has too many database connections open. One common cause is code that fails to close connections after opening them. Keep that in mind, because I will show that this is what caused the error…

Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

This is a standard follow-up message. The fact that users of the site get to see this stack trace too is just bad.

Exception Details: System.InvalidOperationException: Timeout expired.  The timeout period elapsed prior to obtaining a connection from the pool.  This may have occurred because all pooled connections were in use and max pool size was reached.

A timeout error. A reference to the connection pool and the max pool size. This already indicates that more connections are being opened than closed and the system can’t handle that correctly. There are .NET frameworks better suited for database access that prevent these kinds of errors, which happen to be very common in ASP.NET applications and in generic database applications written in .NET.

Basically, the top of the error message is just repeating itself. Blame Microsoft for that, since this is a generic message from ASP.NET itself. Developers can change the way it looks, but that’s not very common. Actually, developers should prevent users from seeing these kinds of messages to begin with. Preferably, the error should be caught by an exception handler which writes it to a log file or database and sends an alert to the administrator.
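
For example, a minimal sketch of such a handler in Global.asax.cs (the log path and the error page are invented):

using System;
using System.IO;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Error(object sender, EventArgs e)
    {
        Exception ex = Server.GetLastError();

        // Log the full details somewhere the administrator actually monitors.
        File.AppendAllText(@"E:\logs\errors.log",
            DateTime.UtcNow + ": " + ex + Environment.NewLine);

        Server.ClearError();               // suppress the yellow ASP.NET error page
        Response.Redirect("~/Error.aspx"); // and show the user a friendly page instead
    }
}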

Considering that I received this error on a Friday afternoon, I bet the developer and administrators are already back home, watching television like I do now. Law & Order is just on…

Source Error:

Line 1578:
Line 1579:        cmSQL = New SqlCommand(strSQL, cnSQLconfig)
Line 1580:        cnSQLconfig.Open()
Line 1581:
Line 1582:        Try

This is interesting… The use of SqlCommand is a bit old-fashioned. Modern developers would have switched to e.g. the Entity Framework or another, more modern solution for database access. But the developers of this site are just connecting to the database in code, probably to execute a query, collect the data and then close the connection again. The developers are clearly using ADO.NET for this site. And I can’t help but wonder why. They could have used more modern techniques instead. But they probably just need to keep an existing site running and aren’t allowed to use more modern solutions.

But it seems to me that closing the connection is not going to happen here. There are too many connections already open, thus this red line of code fails. The code creates a command on a connection called cnSQLconfig and then tries to open that connection, which fails. Unfortunately, opening the connection happens outside the Try block, so when a later statement fails, it is very likely that the connection won’t be closed either.
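
The standard pattern, here sketched in C# (the method and all names are mine, not the site’s code), wraps the connection and command in using blocks so the connection is always returned to the pool:

using System.Data.SqlClient;

public static class ConfigData
{
    public static void Load(string connectionString, string sql)
    {
        // 'using' guarantees Dispose(), which closes the connection and returns
        // it to the pool, even when the query throws an exception.
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // process each row here...
                }
            }
        }
    }
}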

If this happened once or twice, it would not be a big problem; the connection pool is big enough. But here it just happened too often.

Another problem is that the way ADO.NET is used here is also vulnerable to SQL injection. That would be another good reason to use a different framework for database access. It could still be that they’re using secure code to protect against this, but what I see here doesn’t give me much confidence.
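
The usual first line of defence is a parameterized query; a small C# sketch (the table and column names are invented):

using System.Data.SqlClient;

public static class QuestionnaireQueries
{
    public static SqlCommand BuildQuery(SqlConnection connection, string countryCode)
    {
        // The user input is passed as a parameter, never concatenated into the
        // SQL text, so it cannot change the structure of the query.
        var command = new SqlCommand(
            "SELECT Title FROM Questionnaires WHERE CountryCode = @country",
            connection);
        command.Parameters.AddWithValue("@country", countryCode);
        return command;
    }
}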

Source File: E:\wwwroot\beta.example.com\index.aspx.vb    Line: 1580

A few other interesting facts. First of all, the code was written in Visual Basic. That was already clear from the code, but this just confirms it. Personally, I prefer C# over Visual Basic, even though I’ve developed in both myself, and in a few other languages. Language should not matter much, especially with .NET, but C# is often considered more professional than BASIC. (Because the ‘B’ in BASIC stands for ‘Beginners’.)

Second, this code-behind file has over 1,580 lines of code. I don’t know what the rest of the code is doing, but it’s probably a lot. Again, this is an old-fashioned way of software development. Nowadays, you see more usage of frameworks that allow developers to write a lot less code. This makes code more readable. Even in the main index of a web site, the amount of code should be reasonably low. You can use views to display the pages, models to handle the data and controllers to connect both.

Yes, that’s Model-View-Controller, or MVC. A pattern that’s practical for reducing the amount of code, if used well.
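
As a rough sketch of how little controller code such a page needs in ASP.NET MVC (all names here are hypothetical):

using System.Web.Mvc;

// A hypothetical repository interface standing in for the data layer.
public interface IQuestionnaireRepository
{
    object GetByCountry(string country);
}

public class QuestionnaireController : Controller
{
    private readonly IQuestionnaireRepository _repository;

    public QuestionnaireController(IQuestionnaireRepository repository)
    {
        _repository = repository;
    }

    // GET: /Questionnaire?country=NL
    public ActionResult Index(string country)
    {
        var model = _repository.GetByCountry(country); // the controller connects
        return View(model);                            // the model to the view
    }
}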

And one more thing is strange. While I replaced the name of the site with ‘example.com’, I kept the word ‘beta’ in front of it. I, a user, am using a beta version of their website! That’s bad. Users should not be used as testers because it will scare them off when things go wrong. Like in this case, where the error might even last the whole weekend because developers and administrators are at home, enjoying their weekend.

Never let users use your beta versions! That’s what testers are for. You can ask users to become testers, but then users know they can expect errors like these.

Stack Trace:

[InvalidOperationException: Timeout expired.  The timeout period elapsed prior to obtaining a connection from the pool.  This may have occurred because all pooled connections were in use and max pool size was reached.]
   System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) +4863482
   System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) +117
   System.Data.SqlClient.SqlConnection.Open() +122
   _Default.XmlLangCountry(String FileName) in E:\wwwroot\beta.example.com\index.aspx.vb:1580
   _Default.selectCountry() in E:\wwwroot\beta.example.com\index.aspx.vb:1706
   _Default.Page_Load(Object sender, EventArgs e) in E:\wwwroot\beta.example.com\index.aspx.vb:251
   System.Web.UI.Control.OnLoad(EventArgs e) +99
   System.Web.UI.Control.LoadRecursive() +50
   System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +627

And that’s the stack trace. We see the site loading its controls and resources, and the ‘Page_Load’ method is called at line 251. At line 1706 the system is apparently loading country information, which would be needed to set the proper language. Then it moves on to line 1580, where it probably queries some table based on information from the language file.

Again, this is a lot of code for basically loading the main page. I even wonder why it needs to load data from the database based on the country information. Then again, I was about to fill in a questionnaire, so it probably wanted to load the questionnaire in the proper language. If the questionnaire is multilingual, that would make sense.

Version Information: Microsoft .NET Framework Version:2.0.50727.3655; ASP.NET Version:2.0.50727.3658

And here’s one more bad thing. This site still uses .NET version 2.0 while the modern version is 4.5 and we’re close to version 5.0… It would not surprise me if these developers still use Visual Studio 2005 or 2008 for all this. If that’s the case, their development budget is probably quite low. I wonder if the developers who maintain this site are even experts at software development. It’s not a lot of information to base this on, but in short:

  • The administrator did not prevent error messages from showing up for users.
  • The way ADO.NET is used adds vulnerabilities related to the connection pool and SQL injection.
  • The use of VB.NET is generally associated with less experienced developers.
  • The amount of code is quite large, but that’s common for sites that were developed years ago.
  • Not using a more modern framework makes the site more vulnerable.
  • Country information seems to be stored in XML while the questionnaire is most likely stored inside the database.
  • The .NET version has been out of date for a few years now.

My advice would be to just rewrite the whole site from scratch. Use the Entity Framework for the database and MVC 4 for the site itself. Rewrite it in C# and hire more professional developers to do the work.

A very generic datamodel.

I’ve come up with several projects in the past and a few have been mentioned here before. For example, the GarageSale project, which was based on a system I called “CART”. Or the WordChain project, which was a bit similar in structure. And because of those similarities, I’ve been thinking about a very generic datamodel that should fit almost any project.

The advantage of a generic datamodel is that you can focus on the business layer without needing to change much in the database itself. The datamodel would still need development, but by using the existing model and mapping it to existing entities, you can keep it all very simple. And it resulted in the datamodel shown in this class diagram.

The top class is ‘Identifier’, which is just an ID of type GUID to find the records. This works fine in derived classes too. Since I’m using Entity Framework 6, I can just use POCO classes to keep it all very simple. All I have to do is define a DbContext that tells me which tables (classes) I want. If I don’t create an entry for ‘Identifier’, that table won’t be created either.
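
A sketch of how that could look in EF6 (the ‘DataContent’ class is described below; the property names are my guesses, not the actual diagram):

using System;
using System.Data.Entity;

public abstract class Identifier
{
    public Guid Id { get; set; } // the GUID that identifies every record
}

public class DataContent : Identifier
{
    public string Xml { get; set; } // hypothetical property; holds the XML payload
}

public class GenericModelContext : DbContext
{
    // Only the classes listed here get tables. There is no DbSet for
    // 'Identifier' itself, so no table is created for it.
    public DbSet<DataContent> DataContents { get; set; }
}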

The next class is the ‘DataContent’ class, which can hold any XML. That way, this class can contain all information that I define in code without the need to create new tables. I also linked it to a ‘DataTemplate’ class, which can be used to validate the content of the XML with an XML schema or a special style sheet. (I still need to work out how, exactly.)

The ‘BaseItem’ and ‘BaseLink’ classes are the most important ones here. ‘BaseItem’ contains all fixed data within my system. In the CART system, this would be the catalog. And ‘BaseLink’ defines transactions of a specific item from one item to another. And that’s basically three-fourths of the CART system. (The template is already defined in the ‘DataTemplate’ class.)

I also created two separate link types. One deals with whole numbers and is called ‘CountLink’; you generally use it for items. (One cup, two girls, etc.) The other is for fractional numbers like weights or money and is called ‘AmountLink’. These two will be the most used transaction types, although ‘BaseLink’ itself can be used to transfer unique items. Derived links could be created to support more special situations, but I can’t think of any.
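
Continuing the sketch above, the item and link classes might look like this (the property names are again my own guesses):

public class BaseItem : Identifier
{
    public string Name { get; set; } // the fixed (catalog) data
}

public class BaseLink : Identifier
{
    public Guid FromItemId { get; set; } // the item the asset comes from
    public Guid ToItemId { get; set; }   // the item the asset goes to
    public Guid AssetId { get; set; }    // the item being transferred
}

public class CountLink : BaseLink
{
    public int Count { get; set; }       // whole items: one cup, two girls...
}

public class AmountLink : BaseLink
{
    public decimal Amount { get; set; }  // fractions: weights, money
}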

The ‘BaseItem’ class will be used to derive more special items. These special items will define the relations with other items in the system. The simplest of them is the ‘ChildItem’ class, which defines more information related to a specific item. Child items are strongly linked to their parent item, like wheels on a car or keys on a keyboard.

The ‘Relation’ class is used to group multiple items together. For example, we can have ‘Books’ defined as a relation with multiple book items linked to it. A second group called ‘Possessions’ could also be created to contain all things I own. Items that are in both groups would make up my personal library.

A special relation type is ‘Property’, which indicates that all items in the relation are owned by a specific owner. No matter what happens with those items, their owner stays the same. Such a property could e.g. be a bank account with a bank as owner. Even though customers use such accounts, the account itself cannot be transferred to some other bank.

But the ‘Asset’ class is more interesting since assets are the only items that we can transfer. Any transaction will be about an asset moving from one item to another. Assets can still be anything and this class doesn’t differ much from the ‘BaseItem’ class.

A special asset is a contract. Contracts have a special purpose in transactions. Transactions are always between an item and a contract: either you put an asset into a contract or you extract it from a contract. And contracts themselves can be part of bigger contracts. By checking how much has been sent to or received from a contract, you can check whether all transactions combined are valid. Transactions have to specify whether they’re sending items to the contract or receiving them from the contract.

The ‘BaseContract’ class is the more generic contract type and manages a list of transactions. When it has several transactions, it is important that no ‘phantom items’ remain. (A phantom item is something that’s sent to the contract but not received by another item, or vice versa.) These contracts need to be balanced as a check to see if they can be closed or not. They should be temporary and last from the first transaction to the last.

The ‘Contract’ type derived from ‘BaseContract’ adds an owner. This owner will be the one who owns any phantom items in the contract. This reduces the number of transactions and makes the contract everlasting. (Although it can still be closed.) Balancing these contracts is not required, making them ideal as e.g. bank accounts.
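
A balance check for such a contract could then be as simple as summing the transactions; a sketch building on the classes above, assuming incoming amounts are stored as positive values and outgoing amounts as negative ones:

using System.Collections.Generic;
using System.Linq;

public class BaseContract : Identifier
{
    public List<AmountLink> Transactions { get; set; } = new List<AmountLink>();

    // Balanced means no phantom items remain: everything sent to the
    // contract has also been received from it.
    public bool IsBalanced()
    {
        return Transactions.Sum(t => t.Amount) == 0;
    }
}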

Yes, it’s a bit more advanced than my earlier CART system, but I’ve considered how I could use this for various projects that I have in mind. Not just the GarageSale project, but also a simple banking application, a chess notation application, a project to keep track of blood sugar measurements for people with diabetes, and my WordChain application.

The banking application would be interesting. It would start with two ‘Relation’ records: “Banks” and “Clients”. The Banks relation would contain Bank records with information about multiple banks. The Clients relation would contain the client records for those banks. And because of the datamodel, clients can have multiple banks.

Banks would be owners of bank accounts, and those accounts would be contracts. All the bank needs to do is keep track of all money going in or out of the account. (Making money just another item, with all transactions of type ‘AmountLink’.) But to link those accounts to the persons who are authorized to receive money from the account, each account would need to own a Property record. The property record then has a list of clients authorized to manage the account.

And we will need six different methods to create transactions. Authorized clients can add or withdraw money from the account. Other clients can send payments to or receive payments from the account, where any money received from the contract needs to be authorized. Finally, the bank charges interest, or pays interest. (Or not.) These interest transactions don’t need authorization from the client.

The Chess Notation project would also be interesting. It would start with a Board item and 64 square items, plus a bunch of piece assets. The game itself would be a basic contract without an owner. The Game contract would contain a collection of transactions transferring all pieces to their starting locations. A collection of ‘Move’ contracts would also be needed, owned by the Game contract. Each Move would show which move it is (including branches of the game) and the transactions that take place on the board. (White rook gone from A1, white rook added to A4 and black pawn removed from A4, which translates into rook takes pawn at A4.)

It would be a very complex way to store a chess game, but it can be done in the same datamodel as my banking application.

With the diabetes project, each transaction would be a measurement. The contract would be owned by the person measuring his or her blood sugar, and we don’t need to send or receive these measurements, just link them to the contract.

The WordChain project would be a bit more complex. It would be a bunch of items with relations, properties and children. Contracts and assets would be used to support updates to the texts with every edit of a WordChain item kicking the old item out of the contract and adding a new item into the contract. That would result in a contract per word in the database.

A lot of work is still required to make sure it works as well as I expect. It would not be the ideal datamodel for all these projects, but it helps me to focus more on the business layer and the GUI without worrying about database changes. Once the business model becomes more advanced, I could create a second data layer with a better datamodel to improve the performance of the data management.

Is XML in decline?

I happen to be one of those older software developers who saw the rise of XML. I even remember the older SGML standard, although I never used SGML. Version 1.0 of XML became an official standard in 1998. Once it became a standard, many companies started working to create the killer app that would work with XML without much hassle. And although at first many companies created their own XML parsers, not all of them conformed completely to the standard. Those parsers disappeared fast enough too.

Right now, version 1.1 of XML is the latest standard. Yes, in 16 years not much has happened to this standard. And the changes that have been applied are mostly about supporting EBCDIC platforms and the newer Unicode definitions. There are discussions about a version 2.0, but it’s not likely to become a standard soon. Strange as it might sound, XML seems to be in decline if you look at how it’s used.

The power of XML was, of course, in the way you defined these files and in the transformations you could apply to them. While we used DTD files at first to define the structure of an XML file, some smart people came up with the XSD schema format, which allowed more flexibility and is itself an XML file. Combined with some nice graphical tools, XSD made it easier to define an XML file and to validate whether an XML file conforms to the proper structure. And I’ve made plenty of XSD files between 2000 and 2010, since my work required a lot of XML data exchanges.

Of course, transformations are also important, and here we use stylesheets. An XSLT file is itself written in XML and defines how to convert an XML file to some other output format. In general, this output would be another XML file, an HTML document to display in a web browser, a simple text file or even a comma-separated file. In some special cases it could even create a complete rich-text document that you could open in Word. This meant that you could e.g. send an XML file to a server and the server would then process it. It would validate the file with a schema and could do additional validation by using a style sheet. If it passed these validation style sheets, other stylesheets could be used to extract data from the XML and send it to other servers for further processing, while it could also generate documentation to return to the user. You could do a lot of processing with just XML files.

Of course, XML also became popular because more developers started to create web services. And they used the SOAP protocol for this, a slightly complex protocol that’s heavily dependent on XML standards. Since SOAP also had a built-in versioning mechanism, you could always check whether the client was still using the right SOAP definitions. You could even use several SOAP message formats on the same system with only the version number as the difference. It wasn’t easy to set up, but it worked extremely well.

And more was developed to support XML further. XPath expressions allow you to point to specific elements within an XML document. With XQuery, you can execute queries on XML files and process the result. With namespaces you can even combine multiple XML definitions that use similar entities. And then we have things like XLink, XPointer and XForms, which never became very popular.

Between 2000 and 2010, it seemed that XML would become a dominating development technique. No more writing code in other programming languages that needed to be compiled, simply because XML happened to offer a fast scripting environment. Many platforms started to provide standard objects for processing XML files, and knowledge of XML became a hard requirement for developers. So, what changed?

Well, many developers consider the XML format a bit bulky, especially because tags are written twice: once to open the element and once to close it. Thus, if an element is called ‘NumberOfElements’ then you have to write <NumberOfElements>10</NumberOfElements>, and that’s a lot of text to store the number 10. As a result, some developers shorten those tag names so the resulting XML is smaller. If you have 10,000 of these tags in your XML file, shortening the name to NOE would save 26 characters per element, thus 260,000 characters in total. This doesn’t seem much, but developers feel they gain a lot by these kinds of optimizations. With modern multi-core processors and systems with 8 or more GB of RAM, such optimizations might make the code half a second faster, which you barely notice with web services, but still… developers think it saves a lot. And yes, when resources are truly limited it makes sense, but the modern mentality is that companies will just add a second server if one is too slow. Or more, if need be. That’s because more hardware is less expensive than having developers optimize the code even further.

These kinds of optimizations make XML files less human-readable, while the purpose was to make this kind of data more readable. It becomes slightly worse when the XML file uses namespaces, since those namespaces are also shortened to just a few letters.

Another problem is the need to parse XML to extract the data. More and more companies are creating web applications that run within web browsers and rely heavily on JavaScript. These apps need to be able to run on multiple devices too. Unfortunately, not all browsers support parsing XML files, and even those that do are a bit complex to use. With regular expressions it’s still possible to extract some data from the XML, but if you need to fill a grid with 50 rows and 20 columns, things become really complex. To solve this, developers started to send data to web applications as JavaScript instead of XML. This could then be executed, and thus the data would load itself into memory. Since JavaScript object literals are less bulky than the begin/end tags of XML elements, this new format was very practical, and thus JSON was born.
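
To illustrate the difference in bulk, here’s the same record in XML and as a JSON object literal:

<person>
  <name>John</name>
  <age>42</age>
</person>

{ "name": "John", "age": 42 }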

The birth of JSON also demanded a change in web services. Since web applications would call these services directly, it would be very clumsy if they had to set up SOAP messages and then parse the SOAP results. A newer, simpler style of web services arose: REST. Of course, there are many other web service styles, but REST seems to have become the new standard, especially because it’s simpler and relies directly on the HTTP(S) protocol.

Of course, web applications have become more important these days because we’re getting more and more devices with all kinds of different operating systems, all of which have web browsers. And, as I said, not all of those devices have a native XML parser built in. They do support JavaScript, though, and as a result it becomes quite easy to develop web applications for all devices using data in JSON format.

Of course, many devices also allow special platform-dependent apps that can be created with development tools for their specific platforms. For OS X and iOS-based devices you would use Objective-C, while you would use C++ or Java for Android devices. (Java is the preferred development platform for Android.) For Windows RT you would use .NET for Metro-style applications, with either VB or C# as the primary language. This makes it a bit difficult to develop software that runs on all three, but several parties have created compilers that produce platform-dependent executables from platform-independent code. Unfortunately, working with XML parsers still differs on all these platforms, and those third-party compilers need to wrap their parsers around the built-in parsers of the underlying platform. That makes them a bit slow.

Since the number of operating systems has risen as the market gets more and more new devices, it becomes more difficult to keep a single standard that’s supported by all those systems. And the XML standard is quite complex, so the different parsers might not all support the same things. In that regard, JSON is much simpler, since it’s just simple assignment statements. And these assignment statements are based on the JavaScript object syntax, which also happens to be similar to the C++, C# and Objective-C syntax. The only difference with those languages is that JSON puts the field names between quotes too, which you can’t do inside those languages.

So, XML is becoming less useful because it requires too much work to use. JSON makes data serialization simpler and is less bulky. Especially now that developers are focusing more on web applications and apps for specific devices, the use of XML is in decline, in favor of JSON and other solutions. But there’s one more reason why XML is in decline. And this is something within the .NET framework that’s called LINQ.

LINQ was implemented as a separate library for .NET version 3.5 but has become popular since then. Basically, LINQ allows you to take data in a structured object and use simple queries to extract data from it, or to execute transformations on it. This is similar to XPath and XSLT, but now it’s part of your development language, allowing you more choice in the functions that you can apply to the data. This is especially important for date fields, since XML doesn’t work well with date formats. LINQ actually makes extracting data from object trees quite easy, and it can be used on an XML document if you’ve read that document into memory in a proper XDocument or XmlDocument object. Thus, the need for XSLT to transform data has disappeared, since you can do the same in C#, VB, F# or Oxygene.
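
A small C# sketch of what that looks like with LINQ to XML (the file and element names are invented):

using System;
using System.Linq;
using System.Xml.Linq;

class CountryQuery
{
    static void Main()
    {
        // Load the document and query it the way XPath/XSLT once would.
        XDocument doc = XDocument.Load("countries.xml");

        var dutchNames = from country in doc.Descendants("country")
                         where (string)country.Attribute("language") == "nl"
                         select (string)country.Element("name");

        foreach (var name in dutchNames)
            Console.WriteLine(name);
    }
}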

The result is that .NET developers don’t have to learn much about XML anymore. Their .NET knowledge combined with LINQ is more than enough. Since .NET also allows serialization to and from XML formats, it’s also quite easy to read and write XML files in .NET. You can import an existing XSD file into your .NET application and have it converted to code, but since most XML data starts as objects that need to be stored as XML, you will often see that developers just define the objects and add attributes to tell whether the object and its fields should be serialized as elements or attributes, and then have the serialization library use these definitions to convert the objects to and from XML. Thus, knowledge of XML schemas is not a requirement anymore.
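
A minimal sketch of that attribute-based approach (all names invented):

using System;
using System.IO;
using System.Xml.Serialization;

public class Country
{
    [XmlAttribute("code")]
    public string Code { get; set; } // serialized as an attribute

    [XmlElement("Name")]
    public string Name { get; set; } // serialized as an element
}

class SerializeDemo
{
    static void Main()
    {
        var serializer = new XmlSerializer(typeof(Country));
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, new Country { Code = "NL", Name = "Netherlands" });
            // Writes <Country code="NL"><Name>Netherlands</Name></Country>,
            // plus the usual XML declaration and namespace attributes.
            Console.WriteLine(writer);
        }
    }
}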

Because .NET development made the dependency on XML knowledge almost obsolete, the popularity of XML is in decline. It’s still used quite often, but the knowledge you need to do practical things with XML tools is disappearing. And similar things are happening on other platforms. Java and PHP also have LINQ-like query libraries now. As a result, those environments can work on structured objects instead of raw XML data. Thus, XML is only needed if the data has to be sent to some other process, and even then, other formats might be chosen.

In fact, many developers are less concerned about the data format that’s used for inter-process communication. The system handles this for them, and they just use a specific serialization library that does the bulk of the work. XML isn’t really declining, but fewer developers need knowledge about the XML format, since development tools have nice wrappers around it that let these developers use XML without even realizing they’re using it. It’s not XML that’s in decline. It’s the knowledge about XML that is in decline…

Motivating developers…

One of the biggest problems for software developers is finding the proper motivation to sit behind a screen for 8 hours per day, designing and developing new code and new projects. It’s often tedious work that requires a lot of mental effort. And the rewards tend to be just more of the same work the next day, and the day after. Creating new code or fixing existing code is like working on an assembly line in a factory, placing a lid on a pot which someone else will close, over and over and over.

But developing code is a mental job, unlike adding lids to pots. During physical jobs, your mind can wander to what you’re going to do in the weekend, what’s on television or whatever else is on your mind. A mental job makes that very difficult, since you can’t think about your last holiday while also thinking about how to solve this bug. And thus developers have a much more complex job than those on the assembly line. A job that causes a lot of mental fatigue. (And sitting that long behind a screen is a physical challenge too.)

Three things will generally motivate people. Three basic things, actually, that humans have in common with most animals. We all like a good night’s sleep, we all like good food and we’re all more or less interested in sex. Three things that apply to almost anyone. Three things that an employer might help with.

First of all, the sleep. Developers can be very busy both at home and at work with their jobs. Many of them have a personal interest in their own job and can spend many hours at home learning, playing or even doing some personal work at their own computers. Thus, a developer might start at 8:30 and work until 17:00. The trip home, dinner and meet and greet with the family will take some time but around 19:30 the developer will be back online on Facebook and other social media, play some online games or study new things. This might go on until well past midnight before they go to bed. Some 6 hours of sleep afterwards, they get up again, have breakfast, read the morning paper and go back to work again.

But a job that is mentally challenging requires more than 6 hours of sleep per day. So you might want to tell your employees to take good care of themselves if you notice they’re up past midnight. You need them well-rested, or else they’re less productive. Even though those developers might do a great job, they could improve even more if they got a full eight hours of sleep every day. And as an employer you can help by allowing employees to visit social sites during work hours, since it helps them relax. It lowers the need to check those sites while they’re at home. The distraction of e.g. Facebook might even improve their mental skills because it relaxes the mind.

The second motivation is food. Employers should consider providing free lunches to their employees, preferably sharing meals all together in a meeting room or even a dining room. Have someone do groceries at the local supermarket to get bread, spread, cheese, butter, milk, sodas and other drinks and snacks. While it might seem a waste of money, the shared meal will increase morale, allow employees to have all kinds of discussions with one another and improve team building. It also makes sure everyone has lunch at the same moment, so they will all be back at work at the same time.

Developers tend to have lunch between 11:30 and 14:00, and if they have to get their own lunch, it’s not unlikely for them to just go out to the local supermarket themselves or to bring lunch from home. When they go shopping for lunch, they’re unavailable during that time. Of course, lunch time is their own time, but if you need them, you don’t want to wait until they’re back from the supermarket. Another problem is that those employees will start storing food at work, in their desks or wherever else they can. This could attract mice, and I don’t mean computer mice but the live, walking and eating animals.

If an employer provides the lunch and other snacks, there’s also a central storage for food products. This storage is easier to keep clean than the desks of developers. Besides, those developers now know their food requirements are covered during work hours, so they feel more comfortable.

The third motivation is sex. And here, employers have to be extra careful, because this is a very sensitive subject. For example, a developer might spend some time on dating websites or even porn sites. Like social websites, a small distraction often helps during mental processes, but where a social website might take two minutes to read a post and respond, a dating website takes far more time to process the profiles of possible dating partners. A porn site will also be distracting for too long and might put the developer in the wrong mood.

The situation at home might also be problematic. An employee might be dealing with a divorce, which will impact their sex life. It also puts them back into the world of dating and thus interferes with their nightlife a bit more. This is a time when they will be less productive, simply because they have too much of their personal lives on their minds. And not much can be done to help them, because they need to find a way to stabilize their personal lives again. Do consider sending the employee to a proper counselor for help, though.

Single developers might be a good option, though. They are already used to a single life and thus will be less distracted by their dates. Still, if they’re young, their single status might change, and when that happens, it can have an impact on their jobs. But the impact might even be an improvement, because their partner might actually force them to go to bed sooner, thus fulfilling the sleep motivation.

Married developers who also have children might be the best option, since their family lives require them to live a very regular life. The care for their children forces this regularity. But the well-being of those children might cause occasional distractions too. For example, when a child gets sick, the developer needs someone to care for the child at home. And they might want to work from home a few days a week to take care of their children.

As an employer, you can’t deal with the sex lives of your employees at work. Those things are private. However, it can be helpful for employees if they can spend more time at home, in a private area, if they have certain needs in this regard. Allowing them to work from home gives them more options. Since they don’t need to travel to work, they have more time available. If they decide to visit a dating site for half an hour, they can just work half an hour longer and no one will even know about it. If their child is sick, they can take care of them and still work too.

In conclusion: make sure your employees sleep well, give them free lunches and other snacks at the workplace, and allow them to work from home for their personal needs. All of this helps make them more productive and lets them improve themselves.

To Agile/Scrum or not?

The Internet is full of buzzwords that are used to make things sound more colorful than they are. Today’s buzzword seems to be “Cloud solutions”, and it sounded so new a few years ago that many people applied the term to whatever they were doing, simply to be part of the new revolution. Not realizing that the Cloud is nothing more than a subset of websites and web services. And web services are a subset of the thin client/server technologies of over a decade ago. (Cross-breeding client/server with the Web will do that.) It’s just how things evolve, and once in a while a new buzzword needs to be created; marketers are already working on the next buzzword that should make the Cloud obsolete. Simply because new products need to be sold.

Still, the software development world hasn’t been quiet either. In the past, a project would be completed through a series of steps. It would start with an idea that would be turned into a concept. And this concept would include all requirements for the project. Designers would then come up with some basic principles and additional planning. When they were done, implementation would start, which included writing all the code and integrating the project into existing products. It would then be tested, and once the tests were satisfying, the whole project could be deployed and maintenance would start.

If the project had problems in one of these steps, the team would often have to go back one step. (Or more, on rare occasions.) This principle is called the “Waterfall model”, and its drawback is that every step can take weeks to finish. It generally means you can only release updates twice per year. Not very popular these days.

So, new ideas were needed to make it possible to release updates more often. It started with the Agile Manifesto in 2001, and Agile has become a very popular method these days. Most groups of developers will have heard about it and have started implementing its principles. Well, more or less…

Agile has just four basic rules to keep in mind:

Individuals and interactions over processes and tools.
Working software over comprehensive documentation.
Customer collaboration over contract negotiation.
Responding to change over following a plan.

That’s basically the whole idea. And it sounds so simple, since it makes clear what is important in the whole process. Agile focuses a lot on teamwork and tries to keep every team member involved. Make sure every member is comfortable with the process and, basically, talk a lot with one another. People tend to forget it, but communication is a key element between people.

Of course, whatever you publish should work, and work well enough so users don’t complain about crashing applications or lost data. You might be missing features that customers would like, but that should not be the main focus of the whole process. Keep it working and keep the customer happy.

Of course, since you’re dealing with customers, you will need to know what they actually want. It’s fine if the CEO decides that the project needs methods X and Y implemented, but if all customers tell you they want methods A or B implemented, then either the CEO has to change his mind or the company should start looking for a new CEO.

And keep in mind that things change, and sometimes change really fast. It’s hard to predict what next year will bring us, even online. Development systems get new updates, new plug-ins and new possibilities, and you need to keep up to get the most out of the tools available.

So, where do things go wrong?

Well, companies tend to violate these principles quite easily. And I’ve seen enough projects fail because of this, causing major damage or even bankrupting companies, simply because the company failed at Agile. Failure can be devastating with Agile, since you’re developing at high speed. And we all know: the faster you go, the harder you can fall…

Most problems with Agile start with management. Especially older managers tend to live in the past or don’t understand the whole process. Many Scrum sprints are disrupted because management needs one or more developers from that sprint for some other task. I’ve seen sprints disrupted because a main programmer was also responsible for maintaining a couple of web servers, and during the sprint one of those servers broke down. Since fixing it had priority, his tasks for that sprint could not be finished in time, and unfortunately, other tasks depended on this task being ready.

Of course, the solution would be for another team member to take over this task, but that did not fit the process the company had set up. This task was for a major component that was under the control of just one developer. Thus, he could not be replaced, because that would disturb the process. (Another developer might have slightly different ideas about some implementations.)

Fortunately, this only meant a delay of a few weeks and we had plenty of time before we needed to publish the new product. We’d just have to hurry a bit more…

Agile also tends to fail when teams don’t work well together. Another company had several teams all working on the same project. And unfortunately, the project wasn’t nicely divided into pieces so that each team had its own part. No, all teams worked on all the code, all the pieces. And that, of course, spells trouble.

When you have multiple teams working on the same code, you will often need an extra step of merging code. This is not a problem if one team worked on part A and the other on part B. It does become a problem when both teams worked on part C and wrote code that overlaps. Things go fine when you test just the code of one team, but after the merge you need to test it all over again, so the whole process gets delayed by one more sprint just to test the merged code. And it still leaves a lot of chances for bugs that get ignored during testing. Especially with manual testing, when the tester has tested process X a dozen times already for both teams and now has to test it again for the merged code. They might decide to just skip it, since they’ve seen it work dozens of times before, so what could go wrong?

As it turned out, each team would do its own merge with the main branch. Then they would build the main branch and tell the testers. Thus, while the testers were busy testing the main branch that team 1 provided, team 2 would also merge and tell them again a few days later. The result is basically that all tests have to be done all over again, so days of testing are wasted. Team 3 would follow after this, again wasting days of testing. Team 1 would then decide to include a small bugfix, and again testing would have to start from the beginning.

With automated testing, this is not a problem. You would have thousands of tests that should pass, and after each update to the main branch those tests would run from beginning to end. Computers don’t complain. However, some tests are done manually, and the people who execute those tests will be really annoyed if they have to do the same test over and over with every new build. It would be better if they’d just automate their manual tests, but that doesn’t always happen. So, occasionally they decide that they’ve tested part X often enough, and it never failed, so why would it fail the next time?

Well, because team 1 and team 2 wrote code that conflicts, and that code is in part X. The testers skip it, thus the customer notices the bug. Painful!

There are, of course, more problems. I’ve seen a small company that had a nice, exclusive contract with a very big company. Let’s call them company Small and company Big. Company Small had created a product that company Big really liked, so they asked for an exclusive version of it, with features that company Big would choose. And this contract would be worth tens of millions for company Small and its ten employees.

And things would have gone fine if company Small had just focused on delivering what company Big wanted, and on delivering in time, instead of continuing to work on its own products. But no, other things were more important, and the customer would just get what company Small made, with some minor adjustments. And the CEO was quite happy with this progress. That is, until the customer noticed that their wishes were not being heard. Apparently, all company Big was supposed to do was sign the contract and pay the bill. And once things were done, they would just have to accept what was given to them. So company Big found another company willing to do the same project and just dumped company Small. End of contract, and thus end of income, since company Small worked exclusively for the bigger company. And within five months, company Small went tits-up, bankrupt. Why? Because they did not listen to the customer; they did not keep the customer happy.

And another problem is that companies respond very slowly to changes. I’ve worked for companies that used development tools that were 5 years old, simply because they did not want to upgrade. I still see the occasional job offering where companies ask for developers skilled with Visual Studio 2008 while there are three newer versions available already. (Versions 2010, 2012 and 2013.) In 2003 I was still working on a 16-bit project that was meant to run on Windows 3.1 and up, simply because one single user still used an old Windows 3.11 system. At least, we thought they did, because no one ever asked them if they had upgraded. And that customer never told us that they had indeed upgraded and didn’t think of asking for a 32-bit version…

I’ve seen management hang on to a certain solution even though there was plenty of evidence that newer options were available. I’ve developed software on 32-bit systems with 2 GB of memory when 64-bit systems with up to 8 GB of memory and more speed were available. I had to use a single monitor on a PC that had options for multiple monitors, plus we had extra monitors available, but management considered it a waste. The world is changing and many systems now easily support two or more monitors, but some companies don’t want to follow.

So, what is Agile anyway? It’s a method to quickly respond to changes and the desires of customers, with a well-informed team that feels committed to the task of delivering something the customer wants. (And customers want something they can use and which works…)

Would there be a reason not to use Agile? Actually, yes. It’s not a silver bullet or golden axe that you can use to solve anything. It’s a mindset that everyone in the team should follow. One single member in the team can disrupt the whole process. One manager who is still used to “the old ways” can devastate whole sprints. When Agile fails, it can fail quite hard. And if you lack the reserves, failure at Agile can break your company.

Agile also works better for larger projects, with reasonably big teams. A small project with one team of three members is actually too small to fully implement the Agile way of working, although it can use parts of it. Such a small team makes planning a bit more difficult, especially if team members aren’t always available for the daily scrum meetings. When you’re that small, it’s just better to meet when everyone is available and discuss the next steps. No strict deadlines, since the planning is too complex. What matters is that goals are set and an estimate is made of when each will be finished. Whenever the team meets, they can decide whether the estimate is still correct or needs to be adjusted.

Another problem can be the specialists who are part of the team. Say, for example, that you have a PHP project that needs to communicate with a mainframe and some code written in COBOL. The team might have hundreds of PHP developers, but chances are that none of them know anything about COBOL. So you need a COBOL specialist. And basically he alone carries the tasks of maintaining the mainframe side of the project. You can make him part of the Scrum meetings, but since he has to do his part all by himself, he doesn’t have much use for the other team members. So again, just decide on a specific goal and estimate when it should be finished. Get regular updates to allow adjustments, and let the COBOL developer do his work.

The specialist situation can become even more troublesome if you have to interact with a project that another company is creating. If you do things correctly, you and the other company will discuss a generic interface for the interaction between both projects. You then both build a stub for the other company to use for testing. This stub just has to offer some dummy information, but it should be usable.

When both companies have the stubs they need, they can each work on their own part. They have to keep each other informed if parts of the interface need to change or if some rules change about the data that can be provided. Preferably, this is done by providing a new stub. Both teams have just one goal: implementing all the methods that are part of the stubs. And when parts are fully implemented, they can offer the other company new stubs that already contain some working parts.
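
Such a stub can start as small as one interface with a dummy implementation; a C# sketch with invented names:

// The interface both companies agreed on.
public interface IEmployeeService
{
    string GetEmployeeName(int employeeId);
}

// The stub one company hands to the other: dummy data, but usable for testing.
public class EmployeeServiceStub : IEmployeeService
{
    public string GetEmployeeName(int employeeId)
    {
        return "Test Employee " + employeeId; // dummy information only
    }
}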

Still, when two companies have to work together this way, they have to think small. Don’t create a stub with thousands of methods for all the things you want to add during the next 5 years. Start small. Just add things to the stub that you want to finish in the next sprint. Repeat adding things every sprint, and communicate with the other company about what they’re going to add next. You don’t have to work on the same stub methods anyway. One company might start working on the GUI part that allows users to enter name, address and phone number, while the other works on storing employment data and import/export management. The stubs should just provide dummy methods for the parts that aren’t implemented yet. Each company should develop the parts they consider most important, although both should be aware that everything is finished only when all stub methods are implemented.

Agile is just a mindset. If used properly, it can be very powerful. However, do keep in mind that not all of Agile might be practical for your own situation. Agile requires a lot of time for meetings with developers, with customers and with management. Everyone needs to be involved and everyone needs to be available for those meetings. Scrum becomes more difficult if not all team members are available on all five workdays of the week. And worst of all, team members have to prepare for those meetings. Even for the daily meetings, since they have to keep track of their own progress.

Do not be afraid to implement just part of the whole Agile/Scrum principle. It is made to hybridise with other methods. Use the methods; don’t let the method force itself upon you.

Let’s talk about social media…

When I was a kid, there was no internet. If you wanted to speak with someone else, you had to pick up the phone or just go visit them. Being social was complex, because it involved plenty of travel to meet others. And even when the Internet was born, being social was still something people did in real life, not behind a computer screen. Still, things slowly changed about 15 years ago, when people started to use the Internet for all kinds of fun things. It also helped that proper internet tools became more popular. (And free!) The increased speed and the change from the 33k6 modem to ADSL or cable also helped a lot. And now, just one generation later, being social is something we do online, with bits and bytes.

But enough history. And no, I won’t explain what social media are, because right now you’re reading stuff I wrote on such a social media website. (Yeah, a hosted WordPress site, but I could have used Blogger or Tumblr too.) This discussion is about the complexity of all those social media, not their history.

Most people will be familiar with both Twitter and Facebook. On Twitter you post a message that you’ve just pooped, and on Facebook you post the picture of the result. And if you’re a professional, you might also post it on LinkedIn, if you’ve pooped during office hours. Since you can connect these three together, you start to build a practical resource of all kinds of personal information about yourself online. Twitter is used to send small but important updates about yourself, your company or your products to every subscriber, while Facebook is practical for connecting with the consumers at home. But if you’re looking for a new job or need to hire or find some experts, you use LinkedIn for your search.

Search? That reminds me: there’s also Google Plus, although not many people use it as a social platform. Still, people like it because you can use your Google Plus account to log in to many other websites. (Facebook, LinkedIn and Twitter also support this.) Google also provides email accounts and document management tools, plus plenty of online storage, so it’s a very attractive site to use, even if people are still less social on Google Plus than they are elsewhere.

Yahoo also used to be a great social media center, but the competition with other sites has lessened its influence considerably. Many things that Yahoo offers are also available on other sites. Yahoo also used to be great with its email services, until it decided to drop support for email through POP/SMTP just when Google decided to start expanding its email services. By doing so, Yahoo lost much of its influence and never really managed to get it back, although its photo service Flickr still holds plenty of value. (But here too, the competition is becoming murderous.)

Pinterest, for example, can also be used to share photos with others, although Pinterest is mostly used to share pictures from others, to promote those people. Basically, it’s a site for fans. DeviantArt is a bigger challenge for Flickr and has a huge amount of graphics, especially cartoons and CGI next to photos. But DeviantArt is missing an easy way to connect your other social media to your DeviantArt account.

Behance is another interesting photo site where you can build your gallery and, more importantly, allow people to contact you and offer you jobs and other career opportunities. It also connects better with other social media, and if it were free, it would definitely kill Flickr. Unfortunately, the free version has limitations, and the commercial version is a bit expensive if you just want to share a bit of your work. Or maybe you’d prefer Bitpine.

Then again, if you’re into the art of images and photos, you might like to try to make some profit by selling merchandise. Cafepress is known for this and allows you to upload pictures and put them on all kinds of things, including a cape for your dog or panties for your girlfriend. There are plenty of other sites that offer simpler merchandise like T-shirts, but Cafepress just has a huge collection of things you don’t need but which still look nice with your picture on them.

There are more social media sites, of course, including sites that combine all your social media sites into a single reference so all your friends know where you hang around. About.me combines your bio, your résumé and all kinds of social media connections. Mine tends to have plenty of connections. Connect.me is also practical for connecting with other people and allows you to build up your online reputation. TrustCloud is another medium that links people you know to your account. (Or mine.) Or go to Visify and tell others how active you are online.

An oldie is Reddit, which is more like an online forum. However, it has so many users that all discussions go very fast. Vimeo can be used to share videos, just like YouTube. Or use GitHub if you’re a software developer and want to share your code with others. Or Society3, for those who need social media for their marketing strategies. Or the simplest one: FourSquare, where you can tell where you are and where you went.

Well, I’ve mentioned plenty of social media sites and it’s all great to share your personal information with the World and get your 15 minutes of fame. And they all connect to one another, often via ID providers from Google, Facebook, Twitter or LinkedIn and lately also from Adobe. (Although Adobe is mostly using its ID provider to have others connect to the Creative Cloud.) If you’re connected to even a third of these sites, then there’s a lot of information about you online. And this is where it starts to become creepy and dangerous.

First of all, the amount of personal information that people share is huge. The joke I started with, that people tell others on Twitter that they’ve just pooped, isn’t just a joke. It happens! But when people are on a holiday, they also tend to use Twitter, FourSquare and TwitPic to tell the World where they are. With more information from Facebook, thieves can try to find out where those people live and rob those empty homes. They might also check LinkedIn to see if someone might have some interesting stuff at home. For example, a CEO of a company who’s on holiday in Italy is a more interesting target than a teacher visiting his aunt in Almelo. And these are just a few of the media that can be abused by others, without the need to hack anything.

So beware of your privacy and avoid sharing sensitive information online. Or at least be less interesting than the other online people.

But getting robbed is just one risk. You can protect your home and make sure there’s at least one person there when you’re on holiday. The real problem is that all these media are connected to one another. And in general, you have given them permission to combine their information. A system is only as strong as its weakest link.

Take, for example, Facebook. Many websites use your Facebook ID to let you log in to those websites. Thus, if someone hacks your Facebook account, they also have access to those other websites. And one of those sites might hold your credit card, bank account or PayPal information. The hackers might not even need that information to make purchases in your name, simply because those connected sites remember it internally. I checked which of the sites I use are connected to Facebook and it turns out that I’m connected with over a hundred other websites! A few friends of mine average around 40 sites connected to their Facebook account, and it’s easy to increase that number since plenty of sites want to connect to Facebook.

Fortunately, I have created several websites that connect to Facebook, so several of those connected apps are actually my own sites. Still, it’s a lot. It means you have to be aware that anyone who hacks my Facebook account will be able to use those other sites. What they can do on those sites depends on how those other sites have implemented their security. And the same applies to apps connected to Google Plus, Twitter or LinkedIn.

If you use Flickr or Yahoo then you might have connected that account with Facebook or Google Plus. Since Yahoo is used as an ID provider for even more websites, you can see a complete chain fall down once your Facebook account is taken over. This makes Yahoo less reliable than the others. With Facebook, Twitter, LinkedIn and Google you can try to add more security. For example, copy only the ID key and the email address from the provider and force the user to generate a new password for your site. Thus, if Facebook is hacked, an attacker still needs a password for your site.
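
Sketched in code, that idea could look something like the snippet below. This is just a minimal sketch with an in-memory store; all names are made up, and a real site would of course store a salted hash instead of the plain password.

Imports System
Imports System.Collections.Generic

Module ExternalLoginDemo

    ' Key: provider user id plus email; value: the password for *this* site.
    Private ReadOnly LocalPasswords As New Dictionary(Of String, String)

    Sub Main()
        ' During registration we store a site-specific password next to the
        ' identity we copied from the ID provider.
        Register("facebook:1234", "user@example.com", "site-only-password")

        ' A stolen Facebook session alone is not enough to get in:
        Console.WriteLine(SignIn("facebook:1234", "user@example.com", "guess"))
        Console.WriteLine(SignIn("facebook:1234", "user@example.com", "site-only-password"))
    End Sub

    Sub Register(providerKey As String, email As String, sitePassword As String)
        LocalPasswords(providerKey & "|" & email) = sitePassword
    End Sub

    Function SignIn(providerKey As String, email As String, sitePassword As String) As Boolean
        Dim stored As String = Nothing
        If Not LocalPasswords.TryGetValue(providerKey & "|" & email, stored) Then
            Return False ' Unknown account: send the visitor to registration instead.
        End If
        Return stored = sitePassword
    End Function

End Module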

Which causes another problem. When people have a few dozen social media accounts, they start having trouble remembering all the passwords. I use an email alias for every site. Websites tend to let visitors log in with an email address and password, so I could use the same password for many sites, because my email address is different for every site. (I still use different passwords too, though.) Most people just use the same address and password for many sites. And that’s a big risk, because if one of those sites is hacked, the hackers will be able to use that information on all the other sites.

The bigger websites do have proper security. At least, that’s what most people think. However, both Adobe and LinkedIn have had some serious trouble with their user databases, and users of both websites have received a notice in the past urging them to change their password immediately because of the hacks. And these were just the bigger sites that dared to publicly admit they’d been hacked. Smaller social media sites can be a bigger risk if their security isn’t strong enough. Which is why it’s actually better that they use the ID providers of the bigger sites instead of implementing their own systems.

Developers often ignore security, thinking that what they’re making isn’t very interesting for hackers. But I can’t say it often enough: social media are chained together, and one weak link exposes all.

When you want to build your own social media website, be very aware of security. Don’t build your own version unless you have an expert in your team. And even then, have the code audited by another expert. Since social media chain together, a weak link in this chain will take it all down. Which reminds me of this xkcd comic:

[xkcd comic about competing standards]

When you create your own ID provider, you’re just adding to the competing standards that already exist. What would make your system better than those others? Your site will be more secure by using an existing provider but if that provider has a weakness, your site will fall too unless you require more information.

My suggestion would be that people should be able to log in using Google Plus, Facebook, Twitter or LinkedIn, but combined with some extra security. For example, you know the visitor’s IP address, so you can remember it. As long as it’s the same as in your history, it’s unlikely that the account is hacked. Once it changes, you should ask for one extra piece of information, like a separate password. The visitor should know it, since he had to enter it during registration.

Another option would be asking the visitor for his mobile phone number during registration, so you can send an SMS message as part of the authentication process. Thus, if a user is using a different computer, you can send an SMS with a security code. The user will have to enter that code and then you know you can trust that system. Add it to the list of trusted computers for this user and you can keep the visitor safe. (Microsoft is doing something like this with Windows Live.)
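
A rough sketch of that flow is below. SendSms stands in for a real SMS gateway, and the device “fingerprint” is whatever you use to recognise a returning computer (a persistent cookie, for example); all the names are made up.

Imports System
Imports System.Collections.Generic

Module TrustedDeviceDemo

    Private ReadOnly TrustedDevices As New HashSet(Of String)
    Private ReadOnly Rng As New Random()

    Sub Main()
        Dim device = "some-device-fingerprint"

        If Not TrustedDevices.Contains(device) Then
            ' Unknown computer: send a code to the phone number collected
            ' during registration and ask the visitor to type it in.
            Dim code = Rng.Next(100000, 999999).ToString()
            SendSms("+31600000000", "Your security code: " & code)

            Console.Write("Enter the code from the SMS: ")
            Dim entered = Console.ReadLine()
            If entered = code Then
                TrustedDevices.Add(device) ' Remember this computer from now on.
                Console.WriteLine("Device is now trusted.")
            Else
                Console.WriteLine("Wrong code; this computer stays untrusted.")
            End If
        End If
    End Sub

    Sub SendSms(phoneNumber As String, text As String)
        ' Stand-in for a real SMS gateway call.
        Console.WriteLine("SMS to " & phoneNumber & ": " & text)
    End Sub

End Module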

So, a long story just to start a discussion about the best way to secure social media, reminding everyone that there are actually a lot of sites chained together through all of this.

Dealing with deadlines…

I’ve worked on many projects and all of them had deadlines. And like any other developer, I consider deadlines very annoying as they get closer and closer, forcing me to work more and procrastinate less. The result tends to be an uneven workload, since things are reasonably quiet when the deadline is in two months and extremely busy when the deadline is at the end of this week, and it’s already Wednesday. Deadlines can be especially nasty when someone estimated a task to take three weeks while it turns out to be two months’ worth of work. Or worse, it is two weeks of work, but other tasks in the same period also expand to two weeks each. Thus, if you have 5 tasks that each take two weeks, and you have three weeks in total, then you’re doomed before you can even start…

But while deadlines are Evil, we just can’t work without them. At least, as long as we want to receive our paychecks, we can’t do without them. Why? Because to create a project or to upgrade an existing one, an X amount of money is reserved to cover all development costs. The final deadline is calculated based on the amount of money invested in the project, minus the amount of money that those paychecks will cost. (Plus many other costs…) Once this deadline swooshes by, the product will have to generate revenue so new projects can be started. Else, the end of the deadline will probably mean the end of your paychecks too. So, it’s important to finish within those deadlines.

I can’t help thinking about the Cathedral and the Bazaar, an essay describing the differences between open source and closed source. It fits the area of deadlines too, since bazaars are built by people who feel inspired to build just a small piece of something large. And when they leave, others can take over. As a result, thousands of people can work on building the Bazaar, and while the final result might be chaotic because of all the style and color differences, it’s also something that’s built quickly and without any deadlines, simply because others will fill in the spaces of those who don’t make it in time. Then again, many people working on those bazaars won’t get a paycheck, just some recognition for being part of a larger community.

Building a Cathedral, however, is a very long process which used to take decades or even longer. Things had to be carefully planned and everything needed to be finished in time, because other parts would be built on top of the first parts. Missing a deadline often meant it would take even longer for the cathedral to be finished. Fortunately, most Cathedrals had near-infinite funds, because people knew it would take decades to finish even before they started building. Thus, they would find investors to start things up by donating money in return for promises in the afterlife. Which makes a very wonderful sales argument, by the way. Besides, if for whatever reason the construction of a Cathedral could not continue, people would change the building plans or re-use whatever had been built for something else. Here, deadlines mattered, but because the financial resources were almost infinite, there was never a real, final deadline.

Unfortunately, software developers generally want to be paid and don’t have infinite resources. Thus, we have to deal with final deadlines far more often. Which is why development methods have been created to make sure that there’s at least something finished at the end of a deadline.

When I was young, I learned a technique called SDM, which is based on the seven stages of action. This method is often referred to as a waterfall method and is often considered outdated, because people today expect software development to be “rapid”. In SDM, each stage could take a few weeks to finish and only in the 5th stage would you have some real code that does something. Then the 6th stage would start all kinds of tests and if those tests failed, you would have to go back to stage 5. And if there was a design flaw, you might even have to go back to stage 4 or 3. Thus, it could easily take months before a company would see any results.

A modern approach is called Agile, and basically it differs from the old-fashioned waterfall technique in that you’re now dealing with dozens of small waterfalls instead of one big one. And every waterfall has its own final deadline: a moment when you have to stop working on it, simply because you’re out of resources. Unfortunately, if agile methods aren’t implemented correctly, they tend to fail quite hard and you will miss plenty of deadlines. This is mainly because these methods are created to generate results fast, even though the results themselves are small.

When done correctly, Agile will start with a very small product that has almost no functionality and isn’t much to look at. As time passes, more and more functionality will be added, which is possible because customers start paying for the product. (Or other forms of income are generated.) These customers will make extra demands and, by using agile methods, the developers make the product comply with those demands within a reasonable amount of time.

But Agile will go wrong if developers start writing code too early or when management fails to estimate how long certain tasks will take. Worse, unlike with the old SDM waterfall method, the developers have no idea what the product should look like in two years. With SDM, they will. It’s just that with SDM you won’t have a product before that time, while agile methods allow you to get customers involved at an early stage.

Agile methods also have another advantage: you are allowed to miss some of the deadlines, which just means some functionality won’t be implemented. Your project might, for example, miss the functionality to export to Excel. That’s not a big deal, since you can always decide to try again after some time. But before you can do that, you will have to build up your resources again and analyze what went wrong. With the SDM method, you might discover that exporting to Excel isn’t possible for whatever reason, thus you might have to go back a couple of stages to redesign this part. And going back means the whole project will be delayed even longer.

So, let’s look at several scenarios with deadlines in them…

The deadline was yesterday.

Well, too bad. The project has failed. If you used SDM then start looking for another job since your company will most likely run out of its resources. Of course, there’s still a chance that they find some more investors thus the deadline might get extended. If that’s the case, the deadline just wasn’t final.

If you used agile methods then your product will be missing a feature. This is less costly and maybe you can keep your job but this is also the moment when people have to analyze what went wrong. Too much procrastination? Bad management? Bad planning? Or just too many surprises and unexpected events?

I have seen a scrum sprint of three weeks that contained work for 4 developers, each of whom would have to work at least 32 hours per week. Unfortunately, it was planned during the holiday season around Christmas: two of the developers had taken two weeks off, one had taken three weeks, and the last one would be available most of the time. Management knew all of this months before the sprint started, so it was doomed before it began. To make it worse, the developers would still try to get a lot of work done, thus coding started without much thought for any logical design and the code just became a bigger mess than it was before. Bad planning, because of bad management, resulting in bad code. This can haunt the future of the project, since the next sprints are unlikely to contain tasks to fix the problems that occurred during this period.

When you’re not finished at the end of the deadline, it’s important to analyze why it failed and whether anything usable was produced that could help redo this sprint. The code needs to be frozen and put on a sidetrack (a separate branch in your source control system) because the next sprints will have to be done. In the worst case, the other sprints depend on the thing you were supposed to build, thus you must restart it all over again, causing an extra delay for the final delivery of the product.

The deadline is at the end of this week, and it’s Wednesday.

If you’re using the SDM methodology, this means certain doom, unless the product is finished and the testers can test it within the remaining days without discovering any bugs. I’d have better luck at the horse races betting on the three-legged horse, but okay… It could happen. You can also try to deliver an untested product to your customers, which happens often enough. It’s a gamble, but it might give you a chance to get more resources, which will allow you to fix anything your customers find. Then again, if the bugs are really nasty, customers might claim their money back and might even sue for damages caused by your product.

When you’re using an agile methodology, this sprint will be the last one, so you should start working on fixing any major bugs and forget about adding new features. Disable and hide anything that isn’t implemented, and if you can’t fix certain bugs in the last days, consider the possibility of hiding the options that cause the bug. Your customers are waiting and you’re now in damage-control mode.

However, if you used the agile methods correctly, most of the features that are supposed to be in the product are available. Most bugs have been fixed already as part of earlier sprints. Most functionality should be available, even if you were forced to skip a few sprints. Just remember that this is not the moment to add some new functions. Quite the opposite! This is the moment to disable all that’s not working!

The deadline is in two weeks.

If you use the SDM method, you should be in the last stage, which is called ‘Implementation’. Basically, this is the final test phase of the whole project and things should work just fine. If bugs are encountered, they should be small, and you should just test to see if the project is doing what it’s expected to do. Minor bugs can still be fixed or even ignored, but any changes to the code should have a very minor impact. If you do find severe bugs, you will have to go back one stage, which means you will see the deadline pass by before you have a final product. But you will have a chance to fix those bugs and deliver the product with those fixes untested. Hopefully, those fixes won’t cause new bugs. If they do, angry customers will tell you about them!

Agile methodology will have sprints of two or three weeks so this is your final sprint. You should not be adding new features at this moment because they might add new bugs. Those new bugs are normally fixed during the next sprint, but this one happens to be the last. The final sprint is better used to fix the most serious problems and have them tested so you know the final product will be okay to deliver to your customers.

All projects will have bugs so don’t be afraid if your product has a few. The deadline is there to show when you run out of resources and by delivering the product, you can gain some new resources. If you used agile methods correctly, most bugs will be minor and you will have added plenty of new functionality to please your customers.

The deadline is next month.

Using the SDM method, this means that you will move from ‘Realization’ towards ‘Implementation’, so if some features are still missing, you’d better consider whether those are really needed. This will tell you whether the design you created during the first stages is good enough for the final product. Worst-case scenario? A design flaw, forcing you to go back at least three stages. Still, at this moment you can consider moving to agile methods and fixing things within one or two sprints.

When you’re using agile, you can consider adding a few minor functions to the product, and you should start testing in an environment that resembles that of your customers. Don’t try to come up with new things, and keep the developers available so they can quickly fix issues before the final deadline.

I’ve seen companies make the mistake of letting developers work on the next sprint, which would be part of the next version of the product. This is not a good idea, because if something goes seriously wrong in the current development version, those developers will have to switch back to the code base from before the new development. Worse, that new sprint will most likely fail too, because those developers can’t work on it. Do keep in mind that resources will stop when you’re past the deadline and your product isn’t ready to be delivered.

Of course, many companies will have some reserves that will allow them to delay things a bit more, but customers will be unhappy about this, too. They expected a product at the end of the month and now learn that they might have to wait another two to four weeks, at least.

The deadline is half a year away.

Well, with this much time you can start planning carefully, make a few designs and exchange ideas before you start developing. If you’re starting something new, it might even be a good idea to use the SDM methodology with a deadline set three months in the future. While it’s a single waterfall, it’s very likely that you want to start the project with a good base of functions. It’s no use having a product within a month that allows users to just enter a bunch of data without any other functionality, and with a crappy GUI. Besides, if you do manage to create a working product from scratch within two scrum sprints, then what’s the difference from the old-fashioned waterfall anyway? Okay, you’ve had two smaller waterfalls. Most likely it won’t be enough to appease your customers, because it’s a product that still leaves a lot to be desired.

Still, agile methods would also work fine, because those customers can start adding their desires to the final design. You would have to start small and have a way for your customers to offer lots of comments. It won’t really be your product, because your customers will dictate some of its functionality. Then again, that’s how agile methods work: they offer customers an early peek and allow them to become part of the process.

Which also tends to cause problems with agile, because an important customer might ask for functionality that’s complex and takes a long time to deliver. To keep this customer, you would have to add it, but your other customers might prefer that you start with other features first. Management will have to read through all those demands and decide which ones can be done before the final deadline, and which ones are for the next version.

Do keep in mind that by ‘final deadline’ I don’t mean the end of a sprint. The final deadline is when you have to deliver something to your customers because you’re running out of resources.

With SDM, you can still show a design to your customers and ask if they have any more comments. You can continue with these design stages until you reach the ‘Realization’ part. At that point, you should have enough information about what you want to build, what the project should look like and what your customers are expecting, and you will be able to divide the remaining work into short sprints for the next three months. Up until that moment, you have more need of your designers, while your developers can just be procrastinating or whatever. But once you start to realize things, using sprints will at least make sure that you reach part of your final goals.

Agile also tends to fail because of a lack of vision. You can start with a small project, then listen to the desires of your customers and add more functionality. But sometimes you want to create something big, like a CRM product for supermarkets. Sure, you could start small with a simple CRM product, but then it’s likely that those big customers aren’t interested and you’ll end up with lots of small fish with lots of small desires. Still, plenty of small fish will offer enough resources, as long as you can keep them nibbling at your bait.

The King, the Queen and the Messengers…

Long ago there was a palace with two towers. In the West tower, the King had his office, where he would meet with his Ministers and other staff to rule the country. In the East tower, the Queen had her office, from where she would regulate the Palace staff, take care of the Royal Heirs and handle all social State Affairs. A hallway connects the two towers and has some side rooms where e.g. the Prime Minister, other important State people and guests reside.

This will be a discussion about messaging systems in the computer world, using a monarchy as an example. The King would be a main process on a single machine, either virtual or real. The Queen is another process, most likely running on a second machine and meant to keep the main system healthy. (The heir would replace the King if something happened to him.) The other rooms would be other processes, most likely on other computers, that the King or Queen needs to get data from for their decisions. And the hallways are the network, connecting all these processes.

If the King and Queen and all Ministers and guests were in the same room, communication between them would be very simple. But when you take into account the size of their staff and have to fit it all in a single room, you’ll soon discover that it won’t fit. The same applies to having all business logic within a single process or on a single computer. The amount of memory or disk space on a computer is limited and you risk running out of space soon enough. You would have to reduce the size of the processes, meaning getting rid of some ministers and guests or, in computer terms, data. But it would be more practical to divide it all over more rooms, which will allow the staff to expand more, thus hold more data.

So the royalty divides itself over several rooms in the Palace, with the King and Queen having the best and most practical rooms: the towers. This does make communication between them more difficult. Still, if the King wants to tell something to the Queen, he could go down his tower, walk through the hallway, climb up the Queen’s tower and then tell her what he wanted to say. This trip could take him 15 minutes, and once he’s done talking, he will have to go back again. Thus the King can’t do his own work for at least 30 minutes.

In computer terms, this is the equivalent of synchronous communication. And to be honest, it was quickly decided that this would never work. You can’t move a process into the context of another process on a different machine, let them work together and then bring the process back again. (Well, you could, but it’s just too complex.) So this has been solved by the use of messengers. Or, in TCP/IP terms: messages.

So, the King writes down what he wants to say and puts the letter in an envelope. He calls the messenger and gives him the envelope. The messenger then runs to the Queen and gives her the message. The Queen starts to write an answer, puts it in an envelope and uses the same messenger to send it to the King again. Meanwhile, the King knows the trip will take 30 minutes and calculates that the Queen will need 10 minutes to write the answer. Thus he will wait 40 minutes for the messenger to return before he sends out the next message. If the time is up, he will have to assume there’s an assassin in the hallway who killed his messenger, and he will just call in a new messenger.

When the messenger returns, the King picks up a gun and kills the messenger! He doesn’t need any messenger until his next message, so why waste food or attention on this one? The body can go down the garbage chute and the garbage collection will deal with it.

This is the basis for the synchronous communication you will see in many Internet-based applications. It’s not really synchronous, since the process can still continue with other tasks that don’t need anything from the Queen’s answer. Basically, a thread is started to send a message to the other process and then wait for a reply. If the thread returns with an answer, the thread can be closed, destroyed and picked up by the garbage collector to free resources. If it does not return in time, the process will have to assume something happened to the message, thus an alert should be raised and actions will have to be taken to discover why the message failed. These actions can include checking the network for errors or just sending the message again with a new messenger.
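
In .NET terms, a minimal sketch of this pattern could look like the code below; the timings are made up and the Queen is simulated with a sleep.

Imports System
Imports System.Threading
Imports System.Threading.Tasks

Module SingleMessengerDemo

    Sub Main()
        ' The messenger: a worker that carries the question and returns the answer.
        Dim messenger As Task(Of String) =
            Task.Run(Function()
                         Thread.Sleep(500) ' The trip plus the Queen's writing time.
                         Return "Answer from the Queen"
                     End Function)

        ' Wait at most "40 minutes" (here: 2 seconds) for the reply.
        If messenger.Wait(TimeSpan.FromSeconds(2)) Then
            Console.WriteLine(messenger.Result)
            ' The task object is no longer referenced afterwards; the garbage
            ' collector plays the role of the King's gun.
        Else
            Console.WriteLine("No reply in time. Assume an assassin; send a new messenger.")
        End If
    End Sub

End Module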

It’s not really a synchronous process, since the King can decide to do other things, but the King can send only one messenger at a time, so he will have to wait before he can send another message. Still, killing the messenger seems a bit “troublesome”.

So the King decides to have a collection of messengers ready to send messages. He knows he needs about 4 minutes to write each message while the Queen needs about 10 minutes. Thus he needs at least 10 messengers to continuously send messages to the Queen. When the messengers return, he can put them back in his collection, thus re-using his resources instead of wasting them.

In computer terms, this translates to a thread pool. Simply put, the system creates an area where it can keep threads that it doesn’t need right now, so it doesn’t have to spend time on the creation or destruction of these threads. The process is still synchronous, since the King expects messengers to return. And if messengers are delayed, he might run out of messengers, so he would need a few more, just in case. He can always order one more messenger, but it takes some time to assign this job to some random citizen who happens to be in the Palace at that moment. It is effective, since it allows the King to continue working and to keep sending messages to the Queen. He only gets in trouble when he needs an answer from the Queen before he can continue working. And while he hasn’t received the answer yet, he can still work on other tasks.
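
In .NET you don’t even have to build this yourself: the runtime’s ThreadPool keeps worker threads around and re-uses them. A small sketch:

Imports System
Imports System.Threading

Module MessengerPoolDemo

    Sub Main()
        Using countdown As New CountdownEvent(10)
            For i = 1 To 10
                Dim messageNumber = i ' Capture the loop value for the lambda.
                ' QueueUserWorkItem borrows a pooled thread, delivers the
                ' message, and returns the thread to the pool afterwards.
                ThreadPool.QueueUserWorkItem(
                    Sub(state)
                        Console.WriteLine("Delivering message " & messageNumber &
                                          " on pooled thread " &
                                          Thread.CurrentThread.ManagedThreadId)
                        countdown.Signal()
                    End Sub)
            Next
            countdown.Wait() ' The King waits until every messenger reports back.
        End Using
    End Sub

End Module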

But what if the Queen also wants to say things to the King? Does she wait until the King sends her a message and add her sayings to the response? That would not be practical, since she never knows when a new messenger will arrive. What if the King doesn’t have anything to say? So the Queen will also have her share of messengers, just like the Ministers and the guests. Thus, they can all send messages whenever they like and wait for the responses. And how many messengers they need depends on the distance to the receiver and the amount of time the receiver needs to write the answer.

For example, the Prime Minister might have a room next to the West tower. Thus, his messengers would only need to travel 5 minutes to the King and then wait for 4 minutes, since the King writes really fast. If the Prime Minister needs about 7 minutes to write his messages, he just needs 2 messengers to be able to continuously send messages to the King: each round trip takes 5 + 4 + 5 = 14 minutes, while a new message is ready every 7 minutes. But if he also wants to communicate with the Queen, he will need more messengers, since that distance is longer and the Queen writes slower.

In all these cases, we’re dealing with a request/response system, which is basically what most websites are. The browser sends a request to the server and waits a specific amount of time for a response. If a response doesn’t arrive within the proper amount of time, the user will be told that there’s a technical problem. There’s no way of knowing whether the request failed to arrive or the response failed. What, for example, if the Queen was assassinated? Dead Queens can’t reply. And a new Queen would be needed to get all operations running smoothly again. But making a new Queen takes time. And in the real world, the King would have to approve the new Queen, but fortunately, on the Internet the user would never notice the replacement of the web server, except for the temporary period during which they get no responses. Kings might mourn the loss of their Queen, but visitors of your website won’t care that you’ve replaced your server, as long as they get their responses…

You could, of course, reduce downtime by having a second Queen prepared and ready for when the first one dies. Basically, you would build a second East tower and put up a sign telling the messenger which tower he should take to visit the Queen. You could even have two Queens at the same time, letting messengers decide at random which Queen they will visit. Unfortunately, since the Queens don’t know what the other Queen has said or responded, the King might receive some strange messages. Thus the Queens would need more messengers to send carbon copies of every message sent or received to the other Queen, so they both know what’s going on.

Still, in this system we wait for responses after every message. What if we decide to stop waiting?

So, we start anew. The West tower holds the King with a collection of messengers. A bunch of citizens are in the area, who can all be promoted to messenger if there’s a need for more messengers. On the east side, we now have two towers, each holding a Queen, a collection of messengers and more citizens who can be promoted. The same is true for the Ministers and the guests.

So, the King writes a message and tells a messenger to deliver it to the Queen. The King then starts working on other tasks, delaying those for which he needs an answer from the Queen. He basically forgets about the messenger and won’t wait. He just calls for another citizen to be promoted to messenger. The messenger decides to take the left entrance to the East towers and gives his message to the Queen. Then he’s downgraded to citizen again, free to walk around. Most likely his proximity to the Queen means he’ll be promoted to messenger again real soon.

The Queen writes her message in reply to the King and gives it to her messenger. Her messenger goes on his way, a citizen is promoted to messenger and the Queen continues her work. When the messenger arrives at the West tower, the King remembers the task he had delayed. Since he now has the data he needs, he can continue this task, as long as no other task has a higher priority.

This is a purely asynchronous process. Its advantage is that you don’t have to wait anymore. All you do is delay tasks until the moment you can actually do something for that task again, which generally happens once you have a response. Until then, the King can just say that he can’t finish the task, since he doesn’t have the data.

Of course, there are drawbacks. The King will never know if his message arrived. Does he need to? If he just wrote “I love you.” to the Queen, does he even need a response? Yeah, it would be nice if she responded with “I hate you and only married you for your money.” but it’s just optional. (And no, this is no love story!) The same is true for messages in the opposite direction. The Queen might send messages to keep the King up-to-date without expecting even a thank-you note. An asynchronous system is ideal in situations where you’re not really interested in responses, just regular status updates. Status updates are only needed to know that the other side is still alive.
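
A sketch of such fire-and-forget messaging, with an in-process queue standing in for the hallway; in a real system this would be a message queue between machines.

Imports System
Imports System.Collections.Concurrent
Imports System.Threading.Tasks

Module FireAndForgetDemo

    Private ReadOnly Mailbox As New BlockingCollection(Of String)

    Sub Main()
        ' The Queen's side: a listener that drains the mailbox whenever
        ' letters arrive.
        Dim queen = Task.Run(
            Sub()
                For Each letter In Mailbox.GetConsumingEnumerable()
                    Console.WriteLine("Queen reads: " & letter)
                Next
            End Sub)

        ' The King's side: send and immediately continue with other work.
        Mailbox.Add("I love you.")
        Mailbox.Add("Status update: all is well in the West tower.")
        Console.WriteLine("King keeps working; he never waits for replies.")

        Mailbox.CompleteAdding() ' No more letters today.
        queen.Wait() ' Only so the demo doesn't exit before the Queen reads.
    End Sub

End Module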

But let’s make things more interesting… The King decides to have multiple Queens and marries about 40 different women. Every Queen gets a tower on the east side. This way, the King can produce plenty of heirs and knows that the social parts of his court are well tended. He will still send messages to the Queen, but the messenger can just pick one at random. If one Queen is unavailable, the messenger just goes to the next one.

This is a practical advantage of asynchronous messages. You don’t care who picks up the message, as long as it’s picked up. You’re not waiting for a reply either. If the Queens are supposed to reply in some way, it doesn’t even matter which Queen replies, as long as there’s a reply. This would be a system where a service is replicated in several different processes on several different machines. It allows the system to continue working without the King even noticing that half of his Queens have been assassinated… Or that half the machines are down for an upgrade.

The drawback is, of course, that the Queens don’t know anything about the messages the others have sent or received. This might be troublesome, since they might receive a message from the King as part of a longer discussion, but they just don’t know what the discussion is about or what has been discussed before. So, every Queen will have to make carbon copies of every message, sent or received, and forward these to all other Queens. With 40 Queens, this would make the hallways between the East towers very busy with lots of messengers.

Another solution would be for every Queen to send a carbon copy to a “Mother-Queen”, a Queen who rules all other Queens. This way, the Mother-Queen knows about all discussions and you won’t need 40 messengers for every message to the Queens. Now you need only two: the one the King sent and the one the Queen sends to the Mother-Queen.

The Mother-Queen collects all incoming copies and once in a while sends these to the other Queens as updates. This way, each messenger carries dozens of messages every time, to keep the other Queens updated. It does increase the risk that a Queen receives a message from the King without knowing the previous discussion, but all she has to do is wait for the Mother-Queen to send the update so she will know what the discussion is about. Besides, the King isn’t expecting an immediate answer, so any reply can wait.

Something similar can happen in an asynchronous system. A message could be received by a listener that’s part of a longer discussion. The listener would then just have to wait until it has received the previous parts of the discussion. Since asynchronous systems don’t reply, the tasks that need to be executed in response to these messages can be delayed.

So, how does this translate into computer processes? Say, for example, that every Queen is a process containing a copy of data from a database. As long as all systems have the same copy, it doesn’t matter which one you will ask to insert a new record. Process 21 might receive it and then tell the Master process to do the same update. Once in a while, the Master process will send a complete list of all updates received to all other processes and they will all be synchronized again.
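
A toy version of that scheme is sketched below; everything lives in one process here, while a real system would have replicas on separate machines.

Imports System
Imports System.Collections.Generic

Module ReplicationDemo

    ' Three replicas, each holding its own copy of the data.
    Private ReadOnly Replicas As New List(Of List(Of String)) From {
        New List(Of String)(), New List(Of String)(), New List(Of String)()}
    Private ReadOnly MasterLog As New List(Of String)

    Sub Main()
        InsertViaReplica(0, "record 1") ' Handled by "process 21".
        InsertViaReplica(2, "record 2") ' Handled by another replica.

        ' Replica 1 hasn't seen anything yet; it would serve stale data.
        Console.WriteLine("Replica 1 before sync: " & Replicas(1).Count & " records")

        Synchronize() ' The Mother-Queen sends her batch of carbon copies.
        Console.WriteLine("Replica 1 after sync: " & Replicas(1).Count & " records")
    End Sub

    Sub InsertViaReplica(index As Integer, record As String)
        Replicas(index).Add(record) ' The local copy is updated immediately...
        MasterLog.Add(record)       ' ...and a carbon copy goes to the master.
    End Sub

    Sub Synchronize()
        For Each replica In Replicas
            replica.Clear()
            replica.AddRange(MasterLog) ' Everyone gets the full, ordered log.
        Next
    End Sub

End Module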

Things do become more complex when you’re sending a request for data to one of these processes. Because now you’re going to synchronize a request with a response. If process 21 receives the request, it can respond with data including the newly inserted record. But if process 36 receives the request, it still doesn’t know about the insert, thus you will receive outdated data.

Basically, this means that you will have to share updates between processes really fast. You could, for example, send out 40 messengers with carbon copies of the insert message, or send a single messenger with 40 carbon copies and let him visit all the other systems. Hopefully, all systems will receive the update in time, or the King might receive responses that don’t really match one another. This synchronisation of incoming updates makes things difficult when you have many processes that all do exactly the same job.

So, let’s focus on the Ministers instead. You have one for Foreign Affairs, one for Healthcare, one for Internal Affairs, one for Military purposes et cetera. Basically, they all have different jobs, although they do share some common knowledge. Here, you can send a messenger to a random Minister and the Minister will have to look at the envelope to see if he can do something with the message. If he can, the messenger becomes a citizen again and the Minister starts working. If not, he will tell the messenger to try the next Minister.

This too is a design pattern in the computer world. Basically, these are messages where trial and error is used to find the specific system that is able to process them. With too many listeners there will be a serious delay before the message finally reaches the proper place, but then again, no one is expecting a reply to the message, so there’s no real hurry.
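
A sketch of that trial-and-error routing, better known as a chain of responsibility; all the names here are made up.

Imports System
Imports System.Collections.Generic

' A Minister only accepts envelopes about his own department.
Public Class Minister

    Public ReadOnly Department As String

    Public Sub New(dept As String)
        Department = dept
    End Sub

    Public Function CanHandle(topic As String) As Boolean
        Return topic = Department
    End Function

    Public Sub Handle(message As String)
        Console.WriteLine(Department & " handles: " & message)
    End Sub

End Class

Module MinisterRoutingDemo

    Private ReadOnly Ministers As New List(Of Minister) From {
        New Minister("Healthcare"),
        New Minister("Foreign Affairs"),
        New Minister("Military")}

    Sub Main()
        Deliver("Foreign Affairs", "Treaty proposal from the neighbours")
        Deliver("Agriculture", "A message nobody claims")
    End Sub

    ' Trial-and-error delivery: offer the envelope to each Minister in turn.
    Sub Deliver(topic As String, message As String)
        For Each m In Ministers
            If m.CanHandle(topic) Then
                m.Handle(message) ' The messenger becomes a citizen again.
                Return
            End If
        Next
        Console.WriteLine("No Minister took the message about " & topic & ".")
    End Sub

End Module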

So, when to use which system? Generally, when you’re dealing with users of your software, you have users who expect responses to most of their actions. But not all! For example, if a user requests a web page, they expect a page. But if they save a file to disk, they don’t want a response telling them it was successful. They’d only expect a response if something failed.

But when you design web sites, you enter a world that basically consists of asynchronous systems, with a synchronisation layer on top of them to make usage easier, but which also causes delays. If speed is important, you would prefer asynchronous processes, because many actions might not need an immediate response. Basically, this allows you to send several messages to other processes, which in turn send new messages to more processes, until one or more start sending answers back so you can respond to the user.

And if you don’t get responses back in time, you can tell the user that you could not collect all the data he wanted, and possibly even tell him what data you’re still missing. The user should not be kept waiting forever; instead, you can notify him that you’re processing his request and that you will tell him later, when you do have the required data.

And this is practical when your user is asking for complex things like large reports in PDF format. It takes time to collect all the data and then convert it to readable text. In an asynchronous system you would tell the different groups of data processes to collect the required data and send it to the data-processing processes. Those would then convert the incoming data to practical tables and other overviews and send them to the report builder process. The report builder would lay out all the processed data as pages of the report, and when all data has been received, it can send the report back to the master server. With a bit of luck, the report can be part of the response. If not, the report will have to be stored until the system has a chance to tell the user that there’s a report available.

Fortunately, with modern web browsers that support HTML5 you can use web sockets. With a web socket, the web page can open a communication channel with the web server, so when the web server receives the report, it can tell the user that the report is ready to be downloaded. With older browsers, you will have to fall back on polling mechanisms, where the web page makes continuous requests to the web server, asking “Is it there yet?” Annoying, but polling is the only solution with older browsers or servers that don’t support web sockets.
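
A sketch of such a polling loop is shown below; the status URL and the “ready” response are made-up examples.

Imports System
Imports System.Net.Http
Imports System.Threading

Module PollingDemo

    Sub Main()
        Using client As New HttpClient()
            For attempt = 1 To 30
                Dim response = client.GetAsync("http://example.com/report/status").Result
                If response.IsSuccessStatusCode AndAlso
                   response.Content.ReadAsStringAsync().Result = "ready" Then
                    Console.WriteLine("Report is ready; start the download.")
                    Exit For
                End If
                Thread.Sleep(2000) ' Is it there yet? No? Ask again in two seconds.
            Next
        End Using
    End Sub

End Module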

Still, many people don’t realise that even a web page uses asynchronous messages for its communication. It’s just that there’s a synchronisation layer on top of those messages which makes it seem synchronous. But if the server doesn’t respond in time, the browser will still tell you that it didn’t receive a reply thus it has nothing to show. The browser is just delaying its task while waiting for a response.

This is something you can see more clearly when you open a page with multiple images. In general, the browser will ask for the page, which results in a response. But the browser notices it needs to load several images, so it starts requesting those images, and while it’s waiting for them to download, the user will just see the “broken image” icons but is still able to use the page.

The same is true when you include JavaScript files or stylesheets in your web page. The browser downloads your page but will notice that it needs to download more files before it can render the page for the user to view. It will try to allow the user to do something as soon as possible, yet it has to wait for all those responses or just generate a time-out when it runs out of time.

Just as the King and Queens talk with one another through asynchronous messages, so do the servers on the Internet. Systems will send messages and then wait until they know they have received the data they need, or until a specific time limit has passed. Keep in mind that there’s nothing really synchronous about the Internet, except that most users will think it is a synchronous process.

Time for a new web server…

Today I received my new computer, which will be used as a web server running the Datacenter edition. It’s an Asus P6-P7H55E and I will install Microsoft Windows Server 2012 Datacenter on it. It will be used mostly for my personal web experiments, but it will be linked to my domain names.

I’m using Windows 2012 simply because I received a license for this operating system as part of my MSDN subscription. It’s not meant to be used as a production server but as a development server, for testing purposes. I’m a Senior Software Developer, so it’s perfect for me. I already upgraded its memory to 8 GB and wonder if the 512 GB of disk space will be enough. Then again, I have plenty of external hard disks that can be used for extra storage.

It has an Intel Pentium E5800 onboard, which happens to be pretty decent. The system won’t be very powerful but then again, I don’t expect many visitors either. Those who do visit are most likely visiting my sites that are hosted somewhere else, like this blog. But for experimental purposes, it’s great. I hope, since I still need to set it up correctly. 🙂

I won’t be hosting my blog on it. My blog is nicely hosted on WordPress itself. I’m also not going to use it as mail server, since I use Google Apps for that purpose. And no, if I ever create a useful site that attracts hundreds of visitors every month, I’m probably not going to host it on this server either. Just my personal experiments, although these will be accessible from the outside.

The content of this web server won’t be very valuable, since I will do development on my other systems. And important data will also be stored on my other systems. I am considering changing my previous web server into an SQL Server system, almost completely dedicated to maintaining the more important databases. Since my old web server will not be accessible from the outside, it would make my databases a bit more secure, although it also means that I have to keep two computers running continuously.

For now I still have plenty to do. It still sees only 4 GB RAM instead of 12 and it doesn’t seem to know it has a dual-core processor. And the remote desktop services aren’t operating properly yet. Plus, I need to give it a fixed IP address. And then I’ll have to migrate all projects that I consider important. Finally, I’d have to adjust my router to make sure the new web server will be used. And lots and lots more…

For now, I’m busy! Please, do not disturb… 🙂

The WordChain data model.

In my post “Project born from pain…” I started a new project, which now has the name ‘WordChain’. It basically chains words together. In my post “The first design of “WordChain”…” I worked out the first design of this project, with pen and paper. And now it’s about time to consider the data model for this project, which will be the most important part of the project. To do this, I will have to look back at the design…

[Image: the WordChain screen design]

The design has several areas that I’ve marked with letters. As it turns out, I count 11 different items, which indicates things aren’t going to be very complex. Some of this data comes from the configuration, while other data comes from the database. And perhaps some data is a mixture.

So, back to the root of the project. The user selects a word and something about that word will be displayed. The word will be shown in B and is related to the description in G and the image in H. However, other words can also be related to the same description, so we have an m:1 relationship between the words in B and C and the description in G. Each word has just one description, but a description can have multiple words.

Description G will have a main image and possibly other images that are shown in J. Since these images are most likely shared with other descriptions, we have an m:1 relationship between description G and image H, but an m:n relationship between description G and the images in J. Each description will have one main image and multiple other images, while every image can belong to multiple descriptions.

The description will most likely relate to other words in the system, but those words would lead to other descriptions. Or they will lead to external sites. These words are most likely part of the description and will be part of other descriptions too. So we have a second m:n relationship between description G and the words in E and F.

The phrases in I would be ‘special’ words so they would have their own description. They should not be shown as aliases in C nor as words in E and F.

We also have the icon A but that should be part of the configuration, just like the footer K. But the icon A might be overridden by the description. So the description should include a reference to an icon.

And we have the tabs D and these are also part of the configuration. They will link directly to a specific word. Also keep in mind that the root page must also default to a specific word to be displayed, else the visitor has no starting point. This default will be considered an “invisible” tab, but technically we can link to it from icon A, making the icon the first of all tabs.

So we have A, D and K being part of the configuration. A would be a URL, D would be a list of words and K is HTML-formatted text. Next we have the words and phrases B, C, E, F and I that are linked to description G through two different connections: one indicates the description that describes the word, the other is for the related words of a specific description. Finally, we have the images H and J that are linked to the description G.

This means we have a Word table for B, C, E, F and I, an Image table for H and J and a Description table for G. And we have a configuration record for the default (root) word, icon A, tabs D and footer K. And in the background, three tables to connect word to description, description to word and description to image for the multiple relationships.

As it turns out, this is a simple data model. However, words and phrases aren’t identical and must be treated differently. In an object model, this could be done by creating a base class and deriving a Word class and a Phrase class from it. The base class would support the relations with the description, and the child class determines where it will be shown.

A base class for description would also be practical since that would allow me to create more special descriptions like input forms and whatever else I’d like. This is something I will keep in mind for the future.

A base class for images would allow me to make subclasses if I want to add videos, music or other media. So that will be my third base class.
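
That already gives me enough to sketch the entities. The names and navigation properties below are just my working assumptions for the Entity Framework Code First model, not the final thing.

Imports System.Collections.Generic
Imports System.Data.Entity

' Base class for words and phrases (B, C, E, F and I).
Public MustInherit Class Term
    Public Property Id As Integer
    Public Property Text As String
    Public Property Description As Description ' The m:1 side: one description per term.
End Class

Public Class Word
    Inherits Term
End Class

' Phrases are shown separately (I); never as aliases (C) or related words (E, F).
Public Class Phrase
    Inherits Term
End Class

' G, including its relations to the images and the related words.
Public Class Description
    Public Property Id As Integer
    Public Property Body As String
    Public Property MainImage As MediaItem                   ' m:1 (H)
    Public Property OtherImages As ICollection(Of MediaItem) ' m:n (J)
    Public Property RelatedTerms As ICollection(Of Term)     ' m:n (E, F)
End Class

' Base class for images, so videos or music can be derived from it later.
Public MustInherit Class MediaItem
    Public Property Id As Integer
    Public Property Url As String
End Class

Public Class Image
    Inherits MediaItem
End Class

' The context itself would then be little more than:
Public Class WordChainContext
    Inherits DbContext
    Public Property Terms As DbSet(Of Term)
    Public Property Descriptions As DbSet(Of Description)
    Public Property Media As DbSet(Of MediaItem)
End Class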

And with this information I can start Visual Studio and create the Entity Framework model. But that will have to wait since my back is killing me again…