Why social media aren’t happy with topless women in pictures…

People generally wonder why Facebook and Twitter seem to ban all forms of nudity, including the display of bare breasts. (Well, female breasts anyway.) People keep wondering why e.g. Facebook is that prudish; they even have trouble with pictures of women who are breastfeeding. Yet other sites tend to have far less trouble with the same type of content.

Tumblr, for example, has almost no restrictions on the material posted there, as long as it is legal to publish. On Twitter you’re allowed a bit more too, like posting nipples in tweets, although they won’t allow pornography. Many sites won’t, anyway. Still, there’s a good reason for this: the people who join a specific site do so because of the general content of that site.

Many social sites aim at teens and young adults, which means the content needs to comply with specific rules, especially if the site operates in the USA. For example, most parents won’t be happy when their teens visit sites that have the occasional nude image. (Like this blog, for example.) They would block those sites, and then the site can’t target those teens with advertisements.

For Facebook, this would be a problem. Facebook has plenty of advertisements but also plenty of games that attract teens and young adults. They use Facebook to meet friends, play games and more. Thus, Facebook depends on this group of people and has to respond when people report “inappropriate material”. And because they have plenty of teens, they are extremely strict about it. Tumblr has less trouble with this. They make money from the bloggers themselves by offering premium services and premium themes. They also provide advertisements, although those are barely noticeable.

Tumblr doesn’t really target teens so the content can include nudity and even pornography. Because of that, it’s no surprise that you can find plenty of those on Tumblr.

And WordPress? Well, WordPress is available in several versions. You can host it on your own server, you can have it hosted by a service provider, or you do as I do and have it hosted by WordPress themselves. The hosted versions might be a bit more strict because the hosting provider has a reputation to keep up. Worse, since the blogger is paying the provider, the provider might actually prefer fewer visitors over many, to save bandwidth. Nude pictures tend to be large files, and with many visitors the provider loses bandwidth.

Self-hosted WordPress sites have no restrictions, though. The worst thing that could happen is that the police confiscate your hardware and arrest you if you happen to host illegal content.

So, one main reason to block nudity is that people don’t want their teenage children exposed to it. (While plenty of teens might actually be looking specifically for this material and might even exchange nude selfies with friends.) Social sites have to know the type of visitors they generally have and adjust their content to those visitors.

In Second Life, for example, the rules for content within the game were mostly quite relaxed. People were allowed a lot on their own land, as long as it was marked as mature or adult. But Second Life got into trouble after it was discovered that many underage teens were playing the game too. And those teens were suddenly exposed to nudity, sex and a lot of other things. So they decided to create a separate version just for teens and kicked every teen from the adult world to the “nicer” teen world. And when new teens are discovered in the adult world, they too are kicked to the “kindergarten”.

And they banned most of the adult stuff from all areas except the adult areas. Since you have to pay a lot to have an adult area, this meant that many people just left the game. Second Life now has some competition because some developers started to create OpenSimulator, which lets people host their own virtual world on their own system.

This became even more of a problem after some groups combined forces and started hosting virtual worlds similar to Second Life, but for much less money or even for free. Because of this and the ban on adult material, Second Life has lost a lot of people.

There are, of course, more reasons. Sites that want to have viewers in e.g. China need to be aware of the restrictions the Chinese government puts on content: no pornography and preferably no bashing of the government itself. Sites focusing on the USA might also block pornography because there are a lot of people in the USA whose religious views are against such images.

In the UK they’re even demanding that providers block all pornography and adult sites, which led to plenty of protests because far too much was blocked. So sites that want to target citizens in the UK had better clean themselves up so they will get past those (faulty) porn filters.

Again, Facebook is one of the sites targeting the UK, so they definitely want to stay clean. Basically, social sites have to choose between those who claim there’s too much nudity and those who want more nudity. And social sites tend to listen to those who have the most power: not the majority, but those with the biggest influence. And those would be the lawmakers.

For example, mentioning the Tiananmen Square protests of 1989 will likely get you banned in China. Not practical if you want to trade with people in China. Facebook has similar problems but all over the world. In too many countries the law puts some very strict restrictions on nudity. The USA and UK aren’t even the worst of them.

Facebook is also popular in Arab countries, India and plenty of other places whose cultures frown upon female nipples. Facebook wants advertisers everywhere to pay them so they make a lot of profit, and thus they have to give in to the demands of those lawmakers. Fortunately they also want to be in Europe, so they can’t be too strict about their content, but still…

Nipples are banned because they might offend advertisers in certain areas. That even applies to pictures of women breastfeeding their child. Male nipples are generally considered less offensive, though. So yes, there’s discrimination in the Facebook policies. But giving in to the demand to allow more nudity would cost them some of their advertisers, and thus some of their revenue. It would only be worth the trouble if people abandoned Facebook because of this strict policy.

Unfortunately, no advertiser is blocking Facebook because they don’t show enough nipples. And that’s why social media block nipples…

MtGox is close to bankruptcy.

Today I received a PDF file called “Announcement of Commencement of Bankruptcy Proceedings_212014”. Basically, it tells me that MtGox, a bitcoin exchange, is definitely going bankrupt. But that was to be expected. I have less than a single euro in bitcoins at MtGox, so I have no regrets about trying out their service. But plenty of other people have made big investments in bitcoins and stored them at MtGox. Chances are that they will have lost it all, since MtGox has plenty of bills it needs to pay first.

To make it more complex, it’s unclear if bitcoins can be considered equal to money or not. They’re just a collection of bytes in a specific order and format, and they’re worth exactly what people are willing to pay for them. It will be interesting to see what the Japanese court system thinks of the value of bitcoins. People might still get their bitcoins back if the liquidator thinks they’re worthless. But if the Japanese system is similar to the Dutch one, that liquidator could just auction off all bitcoins that MtGox still has in order to pay off the debts. The remaining cash would then be compensation for anyone who had their bitcoins stored there.

Of course, plenty of other countries (the USA and UK, for example) will probably want to get in on the action and try to get some financial compensation too. Plenty of Americans have lost a lot of money because of this. But the Japanese government goes first and all others have to pick at the remaining bones. And I don’t think there will be a lot of meat left on those bones…

The lesson learned from this is, of course, that bitcoins aren’t that safe. Especially if you have them stored at some bitcoin site like MtGox: you are giving up control over your money, and considering how much bitcoins have been worth in the past, being careless with them can cause a big financial blow. Then again, people can also lose bitcoins if they store them on their own systems. Bitcoins on your phone can get lost if your phone is stolen or damaged. Bitcoins on your computer are always at risk of getting wiped. I’ve heard of one guy who threw away his old laptop and later learned that he had a few thousand bitcoins on it, each worth over $1,000 in cash! A very expensive mistake, although he had mined them himself, so he did not really lose money; he just made no profit from the mining.

So, please consider what you’re doing when you use crypto-money like bitcoins. Make sure you’re well informed and don’t buy them in large quantities if you just want to store your savings somewhere. It’s better to start mining them yourself so your losses stay under control.

And yes, banks can go bankrupt too, but crypto-currency is a bit riskier since there’s no proof that you really owned the coins. Once they’re gone, you won’t get them back. This is still something you should leave to true pioneers who are willing to take risks.

The email itself:

関係人各位

株式会社MTGOX(以下「MTGOX」といいます。)につき、平成26年4月24日午後5時00分、東京地方裁判所より破産手続開始決定がなされ、当職が破産管財人に選任されました(東京地方裁判所平成26年(フ)第3830号)。
今後、破産管財人において、MTGOXの財産管理換価、債権調査等の破産手続を遂行していきます。
つきましては、関係者に対する情報提供を目的として、破産手続に関する基本的事項を添付のとおりお知らせいたしますので、ご確認ください。

なお、このメールアドレス(mtgox_trustee@noandt.com )は破産管財人からの送信専用であり、貴殿が本メールアドレス宛の返信等をされても内容確認及び回答などの対応はできません。
破産手続の進行等については、ウェブサイト( http://www.mtgox.com/ )で情報提供をする予定ですので、当該ウェブサイトをご確認ください。
宜しくお願いいたします。

破産者株式会社MTGOX  破産管財人弁護士小林信明


To whom it may concern,

At 5:00 p.m. on April 24, 2014, the Tokyo District Court granted the order for the commencement of the bankruptcy proceedings vis-à-vis MtGox Co., Ltd. (“MtGox”), and based upon such order, I was appointed as the bankruptcy trustee (Tokyo District Court 2014 (fu) no. 3830).
The bankruptcy trustee will implement the bankruptcy proceedings, including the administration and realization of the assets and investigation of the claims.
For the purpose of providing information to the related parties, we hereby inform you of the basic matters regarding the bankruptcy proceedings as attached.

This email address(mtgox_trustee@noandt.com) is used only for the purpose of sending messages, and we are unable to check and respond to any replies to this email address.
Since we plan to provide the information regarding the bankruptcy proceedings by posting it on the website hosted by the bankruptcy trustee ( http://www.mtgox.com/ ), please check this website.

Bankrupt MtGox Co., Ltd. Bankruptcy trustee Attorney-at-law Nobuaki Kobayashi

Betaalverzoek inzake CJIB

Once more, some stupid spammer is trying to get people to pay them lots of money. This one was sent to my sister, who could not figure out how she was supposed to pay, so she asked me. I quickly discovered that it is a big scam and told her so. I’m posting it here to warn other people about this scam too, and about how scammers keep trying new tricks, hoping for suckers who are scared enough to pay.

The scam itself was written in Dutch; the full original email is quoted at the end of this post, but I’ll describe it in English.


[Screenshot of the scam email]

My sister received this email today from the “CJIB” (the Centraal Justitieel Incassobureau, the Dutch agency that collects fines) about a traffic fine of 155 euros. It threatens that her bank account will be blocked as of May 13th, which would thus already have happened. She has to pay before May 19th, the very day she received the email. And yes, that is how spammers try to put their victims under pressure, so they pay without thinking.

What’s important is how the spammers give instructions to buy a prepaid credit card and use it to pay the fine. You are then sent to a site that doesn’t even have a domain name attached to it: a URL with IP address 153.122.39.197 and a folder inside it. There you see a fairly bare page with a payment button.

[Screenshots of the payment pages] When I click through, Google Chrome already warns me that the site has been blocked for phishing. I take the risk anyway and end up at the next page. There the 3V PIN code has to be entered, after which the fraudster can empty the whole prepaid card. Whoever finally enters a 19-digit number gets to see a page saying the payment was successful (even though I used a random number) and that I will receive a message from the Belastingdienst within three to five days.

The Belastingdienst (the Dutch tax office)?

The amount of 155 euros matches nicely with the highest denomination offered by the prepaid card company in question. Fortunately, they are already aware that these fake emails are going around the Internet, so everyone visiting Beltegoed Opwaarderen gets to see a warning about this scam.

[Screenshot of the warning on the Beltegoed Opwaarderen site]

It’s a pity that the warning is shown below the payment buttons and not above them, where it would stand out even more. But everyone should still take it as a warning. Hopefully it is clear enough, but there will always be people who fall for this kind of fraud.

Why do so many people fall for it? That’s quite simple. Messages like this are usually sent to huge numbers of addresses. If 1% of the population falls for it and they send it to 100,000 addresses, that’s still 1,000 victims. Multiplied by 155 euros, that makes it a profitable operation, albeit an illegal one. Fortunately the percentage of victims is much lower than 1%, but even with just 10 victims in such a large group, the money comes in with relatively little effort.

How can you protect yourself against these fraudsters? Basically, you just have to pay attention and know how certain companies and organisations work. The CJIB will certainly not want to be paid via prepaid credit cards. The CJIB will never try to collect fines over the Internet this way anyway.

Constructions like this are mainly meant to funnel the money away so the victim can no longer get to it. You simply lose the money once you’ve paid this way. Even the credit card company can’t get it back, because the fraudsters use the prepaid balance to call an expensive premium phone number, for example. Then the card is empty and the money sits with a phone company, which has to pass it on to a calling service. And from there the money moves even further away from the victim.

What’s also important is that the site never asks for my personal details. They aren’t even in the email. It is addressed to “the driver”, without even mentioning a license plate number. The fraudsters can’t do that, because they don’t have this information. If someone sends an invoice by email, you would expect more details in it. The lack of these personal details is another warning sign.

Anyone who is a bit more technical can also look at the email ‘headers’ to determine where the email came from. It turns out the email originates from the same IP address as the site itself, an address located somewhere in Japan. Possibly a Japanese computer that has become part of a botnet and is being abused without its owner realizing it. So this isn’t a helpful way to find the fraudster. For that, you would have to follow the money…

In any case, you should always be careful with payment requests sent by email. You should basically refuse them by default, unless you are sure it concerns something you still have to pay.

Now the full email as it was received via my sister’s Hotmail account:

x-store-info:4r51+eLowCe79NzwdU2kRyU+pBy2R9QCj0/8P6fDMVumMo6iGJG5XQGQsGw4y+KC5jGdX6A7+/ZVHRw3c8psWXtc+cAfssqe5kw3LdG9RbC+kh049fg5aL5vFishJNonRedbn/JCR2Y=
Authentication-Results: hotmail.com; spf=none (sender IP is 153.122.39.197) smtp.mailfrom=cjibnoreply@cjib.nl; dkim=none header.d=cjib.nl; x-hmca=none header.id=cjibnoreply@cjib.nl
X-SID-PRA: cjibnoreply@cjib.nl
X-AUTH-Result: NONE
X-SID-Result: NONE
X-Message-Status: s1:n
X-Message-Delivery: Vj0xLjE7dXM9MDtsPTA7YT0wO0Q9MjtHRD0yO1NDTD02
X-Message-Info: OR3oMfwJnYHF1wanhF69C9Yey20TK9h7x9GWXuv5yaEGAfYu81s5sUj6V3GqMLsbaFOGIxV4jNuK1YTPnnwB8khYxF5czLKOeqtp5CEeiwA6KP8+eQfiSR4aZ+C9AR+10UtHFivL+rY5J1BgXCW7aHs
+IXGFCGuG7VDEq8ZxsEs1ttSXkle85ecru4AU5KBKfNEdJylVvJENsulQeQGWmUjowK3sd7ew
Received: from vps1.cpanel.net ([153.122.39.197]) by BAY0-MC6-F21.Bay0.hotmail.com with Microsoft SMTPSVC(6.0.3790.4900);
Fri, 16 May 2014 18:16:02 -0700
Received: from [62.140.132.229] (port=27929 helo=newran)
by vps1.cpanel.net with esmtpa (Exim 4.82)
(envelope-from <cjibnoreply@cjib.nl>)
id 1WlTE6-0002gc-Bo; Sat, 17 May 2014 10:15:51 +0900
Reply-To: <noreply@cjib.nl>
From: “Centraal Justitieel Incassobureau”<cjibnoreply@cjib.nl>
Subject: Betaalverzoek inzake CJIB
Date: Sat, 17 May 2014 03:15:51 +0200
MIME-Version: 1.0
Content-Type: multipart/related;
boundary=”—-=_NextPart_000_0040_01C2A9A6.59B75712″
X-Priority: 3
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook Express 6.00.2600.0000
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2600.0000
X-AntiAbuse: This header was added to track abuse, please include it with any abuse report
X-AntiAbuse: Primary Hostname – vps1.cpanel.net
X-AntiAbuse: Original Domain – hotmail.com
X-AntiAbuse: Originator/Caller UID/GID – [47 12] / [47 12]
X-AntiAbuse: Sender Address Domain – cjib.nl
X-Get-Message-Sender-Via: vps1.cpanel.net: authenticated_id: newran/only user confirmed/virtual account not confirmed
Bcc:
Return-Path: cjibnoreply@cjib.nl
Message-ID: <BAY0-MC6-F21LjANJQ000b8ac21@BAY0-MC6-F21.Bay0.hotmail.com>
X-OriginalArrivalTime: 17 May 2014 01:16:02.0669 (UTC) FILETIME=[91B0C9D0:01CF716D]

This is a multi-part message in MIME format.

——=_NextPart_000_0040_01C2A9A6.59B75712
Content-Type: text/html;
charset=”Windows-1251″
Content-Transfer-Encoding: 7bit

<HTML><HEAD><TITLE></TITLE>
</HEAD>
<BODY bgcolor=#FFFFFF leftmargin=5 topmargin=5 rightmargin=5 bottommargin=5>
<FONT size=2 color=#000000 face=”Arial”>
<DIV>
<IMG align=middle border=0 width=400 height=69 src=”cid:00E9BAC800C5$03195E81$0100007f@uhxyhwczmgwjdgc”></DIV>
<DIV align=center>
&nbsp;</DIV>
<DIV align=center>
&nbsp;</DIV>
<DIV>
&nbsp;</DIV>
<DIV>
Geachte bestuurder,</DIV>
<DIV>
&nbsp;</DIV>
<DIV align=center>
&nbsp;</DIV>
<DIV>
U hebt een beschikking en vervolgens twee aanmaningen ontvangen voor het overtreden van een verkeersvoorschrift.</DIV>
<DIV>
Het openstaande bedrag is niet volledig op de rekening van het Centraal Justitieel Incassobureau (CJIB) bijgeschreven.</DIV>
<DIV>
Daarom zullen wij de bank opdracht gegeven uw rekening te blokkeren per dinsdag 13 mei 2014.</DIV>
<DIV>
Alleen persoonlijk bij het BKR zelf kunt u inzage krijgen in de informatie die het BKR over u ontvangt.</DIV>
<DIV>
Het blokkeren van rekening betekent dat de toegang tot uw rekening geblokkkeerd is met ingang 13-05-2014 voor een periode van vier werken.</DIV>
<DIV>
&nbsp;</DIV>
<DIV>
&nbsp;</DIV>
<DIV>
Met de 3v online krediet kunt u online op onze website de betaling voldoen. U dient hieronder te klikken op<B><I> </B></I><I>3v credit kopen</I> .</DIV>
<DIV>
<B>&nbsp;</B></DIV>
<DIV>
<B> </B></DIV>
<DIV>
<A href=”http://beltegoedopwaarderen.nl/3v”><FONT color=#0000FF><B><U>3v</B></U></FONT></A><A href=”http://beltegoedopwaarderen.nl/3v”><FONT color=#0000FF><B><U> credit
kopen</B></U></FONT></A></DIV>
<DIV>
<B> </B></DIV>
<DIV>
Let op: nadat uw de 3v (prepaid credit) heeft gekocht dient u de 19 cijferige nummercode hieronder te activeren om de betaling te voldoen.</DIV>
<DIV>
Klik hieronder op <I>aanmaning betalen</I><B><I>.</B></I></DIV>
<DIV>
<B>&nbsp;</B></DIV>
<DIV>
<B>&nbsp;</B></DIV>
<DIV>
<A href=”http://153.122.39.197/~newran/”><FONT color=#0000FF><B><U>Aanmaning betalen</B></U></FONT></A></DIV>
<DIV>
Het volledige bedrag van Eur 155,00 (inclusief kosten) moet uiterlijk 19-05-2013 worden betaald. Doet u dit niet, dan wordt u per 19-05-2014 geregisteerd bij BKR.</DIV>
<DIV>
Voorkom blokkade van uw rekening.</DIV>
<DIV>
&nbsp;</DIV>
<DIV>
<B> </B></DIV>
<DIV>
<B> </B></DIV>
<DIV>
Hoogachtend,</DIV>
<DIV>
<IMG align=middle border=0 width=120 height=60 src=”cid:00C18EFDDDDC$00C87F7D$0100007f@uhxyhwczmgwjdgc”></DIV>
<DIV>
Centraal Justitieel Incassobureau.</DIV>
<DIV>
<B>&nbsp;</B></DIV>
<DIV align=center>
&nbsp;</DIV>
<DIV align=center>
&nbsp;</DIV>
<DIV align=center>
&nbsp;</DIV>
</FONT>
</BODY></HTML>

——=_NextPart_000_0040_01C2A9A6.59B75712
Content-Type: image/jpeg;
name=”2007-04-05_handtekening.jpg”
Content-Transfer-Encoding: base64
Content-ID: <00C18EFDDDDC$00C87F7D$0100007f@uhxyhwczmgwjdgc>

[SNIP – base64-encoded image data]

——=_NextPart_000_0040_01C2A9A6.59B75712
Content-Type: image/jpeg;
name=”download.jpg”
Content-Transfer-Encoding: base64
Content-ID: <00E9BAC800C5$03195E81$0100007f@uhxyhwczmgwjdgc>

[SNIP – base64-encoded image data]

——=_NextPart_000_0040_01C2A9A6.59B75712–

 

A very generic datamodel.

I’ve come up with several projects in the past and a few have been mentioned here before. For example, the GarageSale project, which was based on a system I called “CART”, or the WordChain project, which was a bit similar in structure. And because of those similarities, I’ve been thinking about a very generic datamodel that could be applied to almost any project.

The advantage of a generic database is that you can focus on the business layer while you don’t need to change much in the database itself. The datamodel would still need development, but by using the existing model and mapping to existing entities, you can keep it all very simple. And it resulted in this datamodel, shown as a class diagram. (Click the image to see a bigger version.)

The top class is ‘Identifier’, which is just an ID of type GUID used to find the records. That works fine in derived classes too. Since I’m using Entity Framework 6, I can just use POCO classes to keep it all very simple. All I have to do is define a DbContext that tells me which tables (classes) I want. If I don’t create an entry for ‘Identifier’, that table won’t be created either.

The next class is the ‘DataContent’ class, which can hold any XML. That way, this class can contain any information I define in code without the need to create new tables. I also linked it to a ‘DataTemplate’ class, which can be used to validate the content of the XML with an XML schema or a special style sheet. (I still need to work out how, exactly.) The template can be used to validate the data inside the content.
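A minimal sketch of what these base classes could look like as POCO classes; the property names (Xml, Schema and so on) are my own assumptions, since the diagram only names the classes:

using System;

// Common base class: every record is found through a GUID.
public abstract class Identifier
{
    public Guid Id { get; set; }
}

// Free-form XML payload, optionally validated by a template.
public class DataContent : Identifier
{
    public string Xml { get; set; }                    // any XML defined in code
    public virtual DataTemplate Template { get; set; }
}

// Holds the XML schema (or style sheet) used to validate DataContent.
public class DataTemplate : Identifier
{
    public string Schema { get; set; }
}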

The ‘BaseItem’ and ‘BaseLink’ classes are the most important ones here. ‘BaseItem’ contains all fixed data within my system; in the CART system, this would be the catalog. And ‘BaseLink’ defines transactions of a specific item from one item to another. That’s basically three-fourths of the CART system. (The template is already covered by the ‘DataTemplate’ class.)

I also created two separate link types. One deals with whole numbers and is called ‘CountLink’; you generally use it for countable items. (One cup, two girls, etc.) The other is for fractional numbers like weights or money and is called ‘AmountLink’. These two will be the most used transaction types, although ‘BaseLink’ itself can be used to transfer unique items. Derived links could be created to support more special situations, but I can’t think of any.
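Continuing the sketch above, the item and link classes could look roughly like this, together with the context that decides which tables exist; again, the property names and the context name are placeholders of my own:

using System;
using System.Data.Entity;

// Fixed data in the system (the catalog in the CART system).
public class BaseItem : Identifier
{
    public string Name { get; set; }
    public virtual DataContent Content { get; set; }
}

// A transaction moving an item from one item to another.
public class BaseLink : Identifier
{
    public virtual BaseItem Item { get; set; }
    public DateTime Moment { get; set; }
}

// Transaction of a whole number of items (one cup, two girls, ...).
public class CountLink : BaseLink
{
    public long Count { get; set; }
}

// Transaction of a fractional quantity (weights, money, ...).
public class AmountLink : BaseLink
{
    public decimal Amount { get; set; }
}

// Only the classes exposed here end up as tables;
// there is no DbSet for Identifier itself.
public class GenericContext : DbContext
{
    public DbSet<BaseItem> Items { get; set; }
    public DbSet<BaseLink> Links { get; set; }
    public DbSet<DataContent> Contents { get; set; }
    public DbSet<DataTemplate> Templates { get; set; }
}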

The ‘BaseItem’ class will also be used to derive more special items. These special items define the relations with other items in the system. The simplest of them is the ‘ChildItem’ class, which defines extra information related to a specific item. Child items are strongly linked to their parent item, like wheels on a car or keys on a keyboard.

The ‘Relation’ class is used to group multiple items together. For example, we can have ‘Books’ defined as a relation with multiple book items linked to it. A second group called ‘Possessions’ could also be created to contain all the things I own. Items that are in both groups would make up my personal library.

A special relation type is ‘Property’ which indicates that all items in the relation are owned by a specific owner. No matter what happens with those items, their owner stays the same. Such a property could e.g. be a bank account with a bank as owner. Even though customers use such accounts, the account itself could not be transferred to some other bank.

But the ‘Asset’ class is more interesting since assets are the only items that we can transfer. Any transaction will be about an asset moving from one item to another. Assets can still be anything and this class doesn’t differ much from the ‘BaseItem’ class.

A special asset is a contract. Contracts have a special purpose in transactions. Transactions are always between an item and a contract. Either you put an asset into a contract or extract it from a contract. And contracts themselves can be part of bigger contracts. By checking how much has been sent or received to a contract you can check if all transactions combined are valid. Transactions will have to specify if they’re sending items to the contract or receiving them from the contract.

The ‘BaseContract’ class is the more generic contract type and manages a list of transactions. When it has several transactions, it is important that there are no ‘phantom items’ left. (A phantom item would be something that’s sent to the contract but not received by another item, or vice versa.) These contracts will need to be balanced as a check to see if they can be closed or not. They should be temporary and last from the first transaction to the last.

The ‘Contract’ type derived from ‘BaseContract’ contains an extra owner. This owner will be the one who owns any phantom items in the contract. This reduces the number of transactions and makes the contract everlasting. (Although it can still be closed.) Balancing these contracts is not required, making them ideal as e.g. bank accounts.
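A sketch of how the balance check and the owned contract might look on top of the earlier classes; the ToContract flag, the Transactions collection and the intermediate transaction class are assumptions I’m making for illustration (the Asset class in between is omitted here):

using System.Collections.Generic;
using System.Linq;

// Wraps an AmountLink with its direction relative to the contract.
public class ContractTransaction : Identifier
{
    public bool ToContract { get; set; }     // true = sent into the contract
    public decimal Amount { get; set; }
    public virtual AmountLink Link { get; set; }
}

public class BaseContract : BaseItem
{
    public virtual ICollection<ContractTransaction> Transactions { get; set; }
        = new List<ContractTransaction>();

    // A BaseContract can only be closed when nothing 'phantom' remains:
    // everything sent into the contract has also been taken out again.
    public bool IsBalanced()
    {
        decimal received = Transactions.Where(t => t.ToContract).Sum(t => t.Amount);
        decimal sent = Transactions.Where(t => !t.ToContract).Sum(t => t.Amount);
        return received == sent;
    }
}

// Contract with an owner; the owner absorbs any imbalance, so it never
// needs to be balanced and can stay open indefinitely (e.g. a bank account).
public class Contract : BaseContract
{
    public virtual BaseItem Owner { get; set; }
}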

Yes, it’s a bit more advanced than my earlier CART system, but I’ve considered how I could use this for various projects that I have in mind. Not just the GarageSale project, but also a simple banking application, a chess notation application, a project to keep track of blood sugar measurements for people with diabetes, and my WordChain application.

The banking application would be interesting. It would start with two ‘Relation’ records: “Banks” and “Clients”. The Banks relation would contain Bank records with information of multiple banks. The Clients relation would contain the client records for those banks. And because of the datamodel, clients can have multiple banks.

Banks would be owners of bank accounts, and those accounts would be contracts. All the bank needs to do is keep track of all money going in or out of the account. (Making money just another item, and all transactions of type ‘AmountLink’.) But to link those accounts to the persons who are authorized to receive money from the account, each account would need to own a Property record. The property record then holds the list of clients authorized to manage the account.

And we will need six different methods to create transactions. Authorized clients can deposit or withdraw money from the account. Other clients can send payments to or receive payments from the account, where any money paid out of the contract needs to be authorized. Finally, the bank would like to charge interest, or pay interest. (Or not.) These interest transactions don’t need authorization from the client.
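As a rough sketch, an account service built on the contract classes above might expose those six operations like this; the class and method names are hypothetical, only the idea of three pairs of transactions comes from the model, and the authorization checks are left as comments:

using System;

public class AccountService
{
    // Helper: record one transaction on the account contract.
    private static void Book(Contract account, decimal amount, bool toContract)
    {
        account.Transactions.Add(new ContractTransaction
        {
            Id = Guid.NewGuid(),
            Amount = amount,
            ToContract = toContract
        });
    }

    // Authorized clients move money into or out of their own account.
    public void Deposit(Contract account, decimal amount) => Book(account, amount, true);
    public void Withdraw(Contract account, decimal amount) => Book(account, amount, false);   // check Property record first

    // Other parties pay into the account or are paid from it;
    // outgoing payments would need authorization via the Property record.
    public void ReceivePayment(Contract account, decimal amount) => Book(account, amount, true);
    public void SendPayment(Contract account, decimal amount) => Book(account, amount, false);

    // Interest booked by the bank itself; no client authorization needed.
    public void PayInterest(Contract account, decimal amount) => Book(account, amount, true);
    public void ChargeInterest(Contract account, decimal amount) => Book(account, amount, false);
}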

The chess notation project would also be interesting. It would start with a Board item and 64 square items, plus a bunch of piece assets. The game itself would be a basic contract without an owner. The Game contract would contain a collection of transactions transferring all pieces to their starting locations. A collection of ‘Move’ contracts would also be needed, owned by the Game contract. Each Move would show which move it is (including branches of the game) and the transactions that take place on the board. (White rook gone from A1, white rook added to A4 and black pawn removed from A4, which translates into rook takes pawn at A4.)

It would be a very complex way to store a chess game, but it can be done in the same datamodel as my banking application.

With the diabetes project, each transaction would be a measurement. The contract would be owned by the person who is measuring his or her blood sugar, and we don’t need to send or receive these measurements, just link them to the contract.

The WordChain project would be a bit more complex. It would be a bunch of items with relations, properties and children. Contracts and assets would be used to support updates to the texts with every edit of a WordChain item kicking the old item out of the contract and adding a new item into the contract. That would result in a contract per word in the database.

A lot of work is still required to make sure it works as well as I expect. It would not be the most ideal datamodel for all these projects but it helps me to focus more on the business layer and the GUI without worrying about any database changes. Once the business model becomes more advanced, I could create a second data layer with a better datamodel to improve the performance of the data management.

 

 

 

Adventures in 3D.

I remember when Blender first became available to me. It was a 3D rendering engine and it looked fun, so I downloaded it, installed it and tried it. This was somewhere around 1999 and I still had a lot to learn back then. Still, I did not like the user interface of Blender (and still don’t) and I considered it too complex and not useful enough for me, so I soon forgot about it again. I was still interested in rendering 3D images, but I also wanted something simpler.

So, around 2004 I purchased a copy of Poser, and it had the user-friendliness that I was looking for. I needed to collect all kinds of models, though. But by using models I could create some interesting images and could use my own CGI artwork instead of my own photographs for the software development that I like to do.

Being able to generate your own artwork for your applications is a better option than depending on stock material or purchasing/hiring others to make it for you. I don’t want to violate copyrights of others, but when you create websites, you need some graphical parts too and I needed to be my own supplier of these images. Buttons were easy, since Paint Shop Pro and other 2D software have plenty of functionality to create them. But more complex things, like showing a person behind a computer, either required taking pictures or rendering a 3D model. Poser made the second option available to me.

When Second Life became hot, I also played a bit with that. Here is a 3D environment where you can build 3D objects simply by combining several basic shapes, or prims. (From “primitives”.) The game made me more comfortable with 3D environments and made me want even more.

And now it’s 2014. I have a piece of land in Second Life where I can build all kinds of things. I use the Firestorm viewer, which allows me to export my own objects from Second Life to use in other 3D software, and from there I can continue to change them even further. Second Life also allows me to import those exported and modified objects back again, as well as other objects from 3D software, although it does have a lot of problems with many of those models. Unfortunately, Second Life isn’t very clear when it reports errors and doesn’t seem to be able to simply fix some problems during import.

But in all this time, I’ve collected a nice set of 3D software, which I will mention now, including where you can find it and what I think about it. All the software I have is used on Windows systems.

Blender

Blender is a very popular product, but I consider the user interface a bit complex. Too many buttons and options pollute the screen and make it difficult to understand. To make things worse, its user interface behaves differently from standard Windows user interfaces. Dialog boxes tend to appear anywhere, with plenty of different options instead of Yes/No or OK/Cancel. Information is spread all over the screen, so you have to look everywhere to find it. It’s just not intuitive, which is probably because this is an open-source collaboration between many developers who each left their own marks on the application.

Personally, I think the Blender user-interface needs a complete rewrite…

POV-Ray

POV-Ray is another 3D render engine, even older than Blender. POV-Ray uses scripts instead of a 3D graphical environment, so it’s not easy to use if you want to generate a 3D model: you have to write each line of the scene in code. Fortunately, there are plenty of 3D modelling applications that can generate POV-Ray scripts for you. One of them is:

AC3D

AC3D is a commercial product that makes 3D modelling quite easy. Not as easy as Poser or Second Life, but it has plenty of good features. Its user interface could use some sanitation, though. On my dual-monitor setup, some of the dialog boxes tend to pop up on the wrong monitor. But it’s very practical and supports several 3D file formats. For all others, you might want something that’s able to convert many different formats. Something like the Online 3D Model Converter or an application like:

AccuTrans 3D

AccuTrans 3D supports a few 3D file formats, allowing you to convert your models between different applications. This software also allows you to make some simple modifications to your models, and I’ve used it to convert my Poser models to a format that Second Life understands. During this conversion, I also merge the parts of my models that use the same texture, which makes the models simpler within Second Life. Of course, there’s an alternative that’s free:

MeshLab

MeshLab is open-source, but it has a clear user interface. It still has a few flaws, though. For example, it’s a bit slow compared to AccuTrans 3D. And it fails to import some of my models correctly. It also fails to generate an export file that Second Life can read correctly, so I need AccuTrans 3D to create those. (And even then Second Life tends to have problems importing them.)

Still, MeshLab is useful and allows you to make several changes to your models. But if you want to put models in proper poses, you will need:

Poser and Poser Pro

Poser is my favorite tool to create 3D models for use within my software. If I need a model of a person behind a computer, I can make it within 20 minutes with Poser. Just take a model of a person, add clothing models and a computer model, perhaps a desk and chair model, and start rendering. It is very easy to use and it can import models created by other applications, although those will be less flexible than regular Poser models.

Another application that can be used with Poser models is:

DAZ Studio

DAZ Studio is free, which makes it very popular. It uses the same models as Poser does, and DAZ also sells those models! Thus, DAZ has become very popular as a supplier of Poser models.

Maybe it’s because I’m too used to Poser already, but I don’t like the user interface of DAZ Studio. Worse, I’ve tried to open some of my Poser models with DAZ Studio, only to discover that DAZ Studio did not accept many of the changes I’d made to the models. Body parts were reset to their default shapes and it just did not look right.

Still, if you use Poser or DAZ Studio to render some new images, you’ll often want an interesting background too. Indoor settings aren’t much of a problem, but outdoor images need a more complex environment. One solution would be:

Bryce Pro

Bryce can make some great environments, although it seems to be missing some functionality. It also looks very small on my screen at a resolution of 1920×1200. While the results look very good, the user interface is less practical than the alternative:

E-on Vue

I use Vue a lot to render models that I’ve created with Poser. The reason is that Vue generates very good environments while Poser creates fine models. I could use Poser to render those models, but the lack of a good environment makes them look a bit boring.

Still, one problem with Vue is that it cannot export my generated environments for use in other software. Although Vue does have an export option, many of its models are not allowed to be exported. Thus you can create a nice sea, with boats and an island, try to export it, and discover that you can export just one tiny rock from the whole scene. Vue is also quite expensive compared to Bryce.

There is far more 3D software available, for all kinds of purposes. DAZ, for example, also offers Hexagon and Carrara:

Hexagon

Hexagon is just another tool to create models in 3D. I like to use it and have created a few things with it, but it tends to crash a lot. It’s not reliable enough for big projects because it can crash unexpectedly while you’re working on something. While it is very user-friendly, the instability is just annoying.

Carrara

Carrara is similar to DAZ Studio and Poser, since it’s meant to put models in certain poses. But it combines this with landscape modelling, making it more useful. It has a simple interface, which makes it very practical to use. Less is more; well, at least for user interfaces. Users tend to get lost in very busy interfaces.

Carrara can use Poser models and more. It can import templates I’ve created based on Poser models, although it doesn’t always succeed at importing Poser scenes. It can export to a format that Second Life should be able to read, but this too has some incompatibilities. Second Life is just too picky.

Second Life

It’s easy to forget, but Second Life itself is also very capable of building 3D objects. And it’s very user-friendly at this, since building is interactive: you have an avatar that can walk or fly around the object, which helps you to create models at a nice scale. It supports several primitive shapes that can be used to build more complex items. It also allows great control over the textures on your objects.

However, to build objects in Second Life, you need some land where you’re allowed to build. Building is limited to certain areas, unless you own some land yourself. You also have to pay small amounts to upload images to the Second Life environment, making it costly in use. So, there is an alternative:

OpenSimulator

OpenSimulator is an alternative to Second Life. It’s open-source, thus free, and it can be used with the same viewers that are used for Second Life. It is a bit complex to set up your own simulations, and OpenSimulator itself lacks a useful graphical interface. For that, you need a special viewer:

Firestorm

Firestorm happens to be a great viewer for both Second Life and OpenSimulator. While Second Life has its own viewer, Firestorm has more advanced features and can also be used for OpenSimulator. You can use it to build objects within Second Life or OpenSimulator and then export them for use in other 3D software. Thus you could use Second Life to make a building or fortress, export it, and use it in Poser with some models around it.

There are more viewers available for Second Life and OpenSimulator, but I would recommend using Firestorm.

VastPark

One more simulator. Unlike Second Life, VastPark seems to focus more on businesses that want to make more interactive presentations. And what better to use for this than a virtual environment?

But like OpenSimulator, you can’t really use it without first generating the virtual environment. This takes time and some skill with 3D images. You need to create models and create textures for those models, or else it’s just a lot of white on white…

VastPark could also be used to create complex animations by recording the actions within the virtual world. This would be useful for creating training material or documentation of special events, like car accidents or office fires.

LightWave

I haven’t used LightWave itself, but it looks quite nice. However, I do use the LightWave file format as the export format from Poser. I then convert those files with AccuTrans 3D to the Collada format, which Second Life can import. The only problem is that Poser models are extremely detailed, because they are meant for generating highly detailed images. Second Life can’t really handle that much detail and often fails to import these models. I can use AccuTrans 3D to split a Poser model into several parts and import those parts one by one, which seems to work better. However, the models that you import this way into Second Life eat away a lot of your land’s building capacity, so you need a large piece of land. Or your own simulation!

FreeCAD

FreeCAD is just another modelling tool. It has some good examples, but it lacks some practical functions. However, missing functionality can be added through plug-ins. It is a good tool to combine with POV-Ray, and it can do a lot, depending on the design mode you’ve selected.

DeleD

DeleD is another modeller, mostly used for game development. It is useful for simpler objects, not Poser-style models. It works a bit like Second Life, where you select cubes, spheres and other primitives to build more complex objects.

Speaking of game development, there are also libraries for developers that can help them to create their own 3D software. For example:

Horde 3D

This is an open-source 3D rendering engine, to be used in games and 3D applications. It has been created for speed, so it’s less practical if you want to generate highly detailed images. But in a game, you want animations, and you want them in real time, running smoothly.

Ogre 3D

Ogre 3D is another 3D rendering engine, written in C++ and with wrappers for use with Python, C# and Java. It too is great for games and other interactive environments. It also supports Linux, iOS, Android, WinRT and Mac OS X. Basically, it’s a library built on top of lower-level graphics APIs such as OpenGL.

OpenGL

OpenGL isn’t really an application, but today it is supported by almost every computer that has a graphics card. The Khronos Group is responsible for maintaining this standard, so every graphics card can be used through the OpenGL API. (At least, if the manufacturer added support for OpenGL.) Most 3D software relies on OpenGL to display its graphics, although there are plenty of games that use DirectX instead. However, DirectX is an API created by Microsoft for use with Windows applications only. Thus, many developers are focusing more on OpenGL, while Microsoft seems to try to push them back to DirectX.

Oculus VR

The greatest dream of 3D will be the Oculus Rift, a special piece of hardware that’s supposed to give you a 3D virtual environment. Basically, it’s made of two screens, each showing you the scene from a slightly different angle. Since each eye will only see one screen, your brain perceives the virtual world in 3D. (Unless you’re a cyclops.) It responds to the movements of your head, and developing for this device will ask a lot from future developers. 3D worlds are arriving for consumers and companies, even though nice 3D environments are still mostly eye-candy. Developing such 3D worlds is more complex than building a simple web page with text on it, so the technology will still need to conquer its place in this world.

However, there’s also development being done on 3D televisions and monitors that would not require special glasses to view their content. If such a device hits the market, 3D development will become even more important…

So, developers… Prepare to go 3D!

Great photography, licensed or self-made…

The Internet has become extremely important in our daily lives. More importantly, the Internet requires many developers to think more graphically. Twenty-five years ago, computers were mostly text-based with a few small graphics. The Internet was about to be born and graphics were mostly restricted to small icons and images with a limited number of colors. When you were lucky, your graphics card would be a VGA card, able to handle images with 256 colors at a resolution of 640×480 pixels. A graphics standard was needed back then, and a few new formats were born.

The PCX format, created by the now-defunct ZSoft Corporation, turned out reasonably successful because it supported up to 256 colors, with a color palette that allowed those 256 colors to be picked from the full true-color range. It also supported data compression, keeping files reasonably small. Yet the decompression method was pretty fast, so the processor would not need to work hard to display the image.

The PCX format was later extended to true color, but the JPG format turned out to be a better choice. As processors improved, the more complex compression of the JPG format became fast enough to use and resulted in smaller files, although the images would lose some detail.

Another popular format was the GIF format, which allowed images with 255 colors plus a transparent color. (Or 256 colors without transparency.) This format is still popular since it’s great for logos and cartoons and it allows animations. And the compression of GIF files reduces the image size considerably without losing any detail.

The PNG format has become more popular and was created as the successor of the GIF format. It was needed because modern graphics required more colors and there was a demand for better transparency. The PNG format uses 24 or 48 bits for its colors, allowing more colors than the human eye can distinguish, plus an alpha channel that lets an image define the transparency level of each pixel, anywhere between fully transparent and opaque. This was great for e.g. creating dirty glass windows or thin, silk nightgowns as graphics.

There are, of course, many other graphic formats, but I want to talk about art, not formats. And this time, I want to talk about Pavel Kiselev, also known as photoport (NSFW), who likes to create glamorous pictures of pretty women. Today, he posted this picture of Irene, one of his models. (I’ve licensed it for personal use, and this is my personal blog, so it should be okay.)

And this is the kind of photography that I love to see. Should I say more?

Well, okay… I do have to keep in mind that I wanted to relate this to software development so I should not distract myself by continuously looking in those pretty eyes. 🙂 So, back to the software development part…

When you’re designing websites, you have to keep in mind that you will need a lot of graphics. Something simple like an icon to display in the browser is already a requirement these days, or else people have trouble finding your site among their favorites. They can, of course, read the labels in the menu, but most people will glance over all the icons first and click on the icon they recognise as yours. Without the icon, they have more trouble finding you, so never forget to add a favicon to your site! Something that people will easily recognize as your brand.

Next, your site will need a logo and a background image. Or at least a logo. The best logos are PNG or GIF images, because they are small and allow transparency. The image of Irene would be a bad logo since it’s big and takes a lot of bytes. When people visit your site over a slow internet connection, it just looks bad if the logo takes too long to download. So keep it small, yet detailed enough to be recognisable.

The background image may be bigger, unless you’re designing websites for mobile devices. For mobile devices, no background image is better, since it takes less bandwidth. Many mobile devices access the Internet through providers who charge by the megabyte of data sent or received. Thus, for mobile sites you need to keep the amount of data to an absolute minimum, or it becomes expensive to visit your mobile website, forcing visitors to stay away while they’re roaming…

But a favicon, logo and background aren’t always enough. Let’s forget the mobile devices for now and focus on regular browsers and users who pay a fixed price for their connection. Your website will probably offer some services to customers, and you need them to easily recognise what they’re looking at. These days, more and more people dislike reading descriptions and prefer to see something more graphical. You might consider hieroglyphs on your website, but not many people are capable of reading ancient Egyptian. So you need your own set of icons and images for the most important actions on your website, preferably icons with a label next to them.

Take a look at your browser and find the following buttons: Back, Next, Refresh and Home. Did you read any text to find them? Most likely, you found them by looking at the images: arrows for the back and next buttons, an arrow in a circle for the refresh button and a symbol of a house for the home button. These images have become standard, so make sure you have a few of your own to put on your own website, especially if you want navigation buttons on your site. However, do keep in mind that you either have to create these images yourself or get a proper license for images created by someone else. Considering that many icons are already in the public domain or have been released under a Creative Commons license, it should be no big problem to find some for free.

Next, you will probably need images for the products that you want to sell or display. While Irene looks very pretty, I would not use her picture if I wanted to sell socks; I would use a picture of socks instead. And I would make sure I have licensed that picture or created it myself. Preferably, I would create multiple versions at different sizes, so I can display thumbnails first and a larger version if the user wants to see more details. Again, this speeds up loading your site.

It does create a bit of a challenge, though. Would you resize the image to a thumbnail dynamically, or would you store the image both as a thumbnail and in its original format? Both have their advantages. Dynamic resizing allows you to change the thumbnail size whenever you like and even allows all kinds of custom sizes. However, your server will need more processing power to do the resizing, which is slow if your original images were created at huge resolutions. (Like most of my artwork.) If you’re expecting a lot of visitors, storing images at different sizes improves performance considerably, but requires more disk space, which could be a minor problem when your site is hosted and you pay for storage per megabyte. Then again, hosts don’t charge much for extra disk space these days, if they charge anything at all.
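As a rough illustration of the dynamic approach, a small helper like this (a sketch using System.Drawing on Windows; the method and file names are made up) could generate a thumbnail on the fly while preserving the aspect ratio:

using System;
using System.Drawing;
using System.Drawing.Imaging;

public static class Thumbnails
{
    // Resize an image so its longest side becomes maxSize, keeping the ratio.
    public static void CreateThumbnail(string sourcePath, string thumbPath, int maxSize)
    {
        using (var original = Image.FromFile(sourcePath))
        {
            double scale = (double)maxSize / Math.Max(original.Width, original.Height);
            int width = Math.Max(1, (int)(original.Width * scale));
            int height = Math.Max(1, (int)(original.Height * scale));

            using (var thumb = new Bitmap(original, width, height))
            {
                thumb.Save(thumbPath, ImageFormat.Jpeg);
            }
        }
    }
}

// Usage: Thumbnails.CreateThumbnail(@"C:\Images\Irene.jpg", @"C:\Images\Irene_thumb.jpg", 200);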

The image of Irene would be practical for dating sites and sites for bathing products. Her hair has a wet look, giving the impression that she just washed it. She also looks very seductive, which would certainly attract the attention of many men and probably a few women too. However, on a dating site the members would probably recognise her as a professional model and thus consider it a fake profile. She’s too pretty to need a dating site. You’d probably scare a few members away if you used this image. It would still look great for selling shampoo, though.

So, you’re designing a website and thus you will need images to fill it up. This is often the biggest problem for many companies. In many cases, developers will just use Google to find some image and copy it into the project, ignoring the need for any license. They have understandable reasons to work this way, because finding proper images isn’t really a developer’s task. But it could cause legal trouble when the site is published and some photographer recognizes his images. Without a proper license, it could cost you hundreds of euros to correct the situation, and that’s without any other legal costs. Thus it is really bad when developers have to search for the proper images themselves.

A better solution is to create placeholder images. Provide the developers with dummy images that you’ve created yourself by adding a textual description to a newly created image at the preferred size. Make sure it has a proper filename too. A placeholder can then be inserted by the developer in the proper location, allowing him to continue his work while you start looking for a nice image to replace it. This leaves time to get a proper license or to make the image yourself. Once you’re about to publish the site, all you have to do is replace the placeholders with the images that you want to display.
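Generating such placeholders can even be automated; here is a small sketch (again System.Drawing, with made-up names) that produces a grey image of the requested size with the description drawn on it:

using System.Drawing;
using System.Drawing.Imaging;

public static class Placeholders
{
    // Create a grey placeholder image with the description centered on it.
    public static void Create(string path, int width, int height, string description)
    {
        using (var bitmap = new Bitmap(width, height))
        using (var graphics = Graphics.FromImage(bitmap))
        using (var font = new Font("Arial", 14))
        {
            graphics.Clear(Color.LightGray);
            var format = new StringFormat
            {
                Alignment = StringAlignment.Center,
                LineAlignment = StringAlignment.Center
            };
            graphics.DrawString(description, font, Brushes.Black,
                new RectangleF(0, 0, width, height), format);
            bitmap.Save(path, ImageFormat.Png);
        }
    }
}

// Usage: Placeholders.Create(@"C:\Site\Images\hero-banner.png", 800, 300, "Hero banner: woman washing hair");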

One more, very important thing to remember: when you get a license for any image that you use, make sure you keep track of the specific details of that license. It would be best to have your own database where you store the image together with information about where you licensed it, where you found the image and the license, and the name of the author. You will need this information if the author, or some company representing the author, finds your image online and thinks you don’t have a proper license.

Of course, there’s a risk of getting a fraudulent license: you might have obtained a license from someone pretending to be the author. This is a risk you can reduce by keeping track of the origins of every image used by your organisation. And yes, it’s a lot of additional bookkeeping. But with this information about where you got your license, you have a good chance of getting away without financial damages if the license turns out to be fraudulent. Whether you can continue to use the image will depend on the legislation of the country where your organisation is located and of the country where your website is hosted.

My personal preference is to just create the images myself. This takes time and I need opportunities to create those images. For CGI artwork, my computer is fast enough to render an image in the background while I continue to work on developing my sites. Still, I am limited to one image per computer at any time, and my license for Vue limits me to using the software on just a single computer. Rendering can easily take a few hours, even days, so I have to be patient.

Of course, I could just take one of my digital cameras, but that often means I need a model, a location and the right weather if I’m going to shoot outside. This is a lot of work for a bunch of images, and I will need to do extra work on those photos once I’ve taken them: they need to be cropped, lighting needs to be adjusted, colors need to be enhanced. This is just too much work for a software developer to do. So you’d better hire a professional if you don’t have someone in your organisation dedicated to this. Do make sure the photographer you hire does a “work for hire”, so you’re the official author. Otherwise, the photographer keeps a say in how you can use the photos he took!

So, organisations have the complex task of maintaining licenses and their own images. A lot of organisations tend to forget about these details, which can result in costly problems. Make sure your developers have something to work with while they are developing, and make sure they don’t have to waste time on those images themselves, since developers are costly too. They should focus on the code, not the graphics. Make sure someone in your organisation manages all images and is responsible for checking anything that’s about to be published for unknown images. If an image isn’t in the system maintained by the image manager, block the publication until this is fixed.

Multithreading, multi-troubling.

Recently, I worked on a small project that needed to make a catalog of image files and folders on my hard disk and save this catalog in a database. Since my CGI and my photography hobbies generate a lot of images, it would be practical to have something simple to support it all. There’s plenty of software that already does something like this, but none that I liked. Especially since I want to connect images to derived images, group them, tag them, share them, assign licenses to them and publish them. And I want to keep track of where I’ve shared them already. Are they on Flickr? CafePress? DeviantArt? Plus, I wanted to know if they should be rated as adult. Some of my CGI artwork is naughty by nature (because nude models are easier to work with) and thus unsuitable for a broad audience.

But for this simple catalog I just wanted to store the image folder, the image filename, an image name that would be the filename without extension and without diacritics, plus the width and height of the image so I could calculate the image ratio. To make it slightly more complex, the folder name would be a relative folder name based on a root folder that’s set in the configuration. This would allow me to move the images to a different folder or use the same database on a different machine without the need to adjust all records.

So the database structure is simple: one table for the folders, one table for the image ratios and one for the image names and sizes. The ratio table will help me group images by the ratio between width and height; the folder table does the same for grouping by folder. The Entity Framework connects to this database and takes away a lot of my troubles. All I have to do now is write a simple library that fills and maintains this catalog, plus a console application to call those methods. Sounds simple enough.
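
To give an impression, here is a rough code-first sketch of what such an Entity Framework model could look like. The class and property names are my own guesses for this post, not the actual project code:

```csharp
using System.Collections.Generic;
using System.Data.Entity;   // Entity Framework 6

public class ImageFolder
{
    public int Id { get; set; }
    public string RelativePath { get; set; }        // relative to the configured root folder
    public virtual ICollection<ImageFile> Images { get; set; }
}

public class ImageRatio
{
    public int Id { get; set; }
    public int Width { get; set; }                  // reduced pair, e.g. 16:9
    public int Height { get; set; }
    public virtual ICollection<ImageFile> Images { get; set; }
}

public class ImageFile
{
    public int Id { get; set; }
    public string FileName { get; set; }
    public string Name { get; set; }                // filename without extension or diacritics
    public int Width { get; set; }
    public int Height { get; set; }
    public int FolderId { get; set; }
    public int RatioId { get; set; }
    public virtual ImageFolder Folder { get; set; }
    public virtual ImageRatio Ratio { get; set; }
}

public class CatalogContext : DbContext
{
    public DbSet<ImageFolder> Folders { get; set; }
    public DbSet<ImageRatio> Ratios { get; set; }
    public DbSet<ImageFile> Images { get; set; }
}
```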

Within 30 minutes, the first version was ready. I would first enumerate all folders below the source folder, then for each folder in that list I would collect all image files of type PNG, JPG and BMP. The folder would be written to the folder table and the file would be put in the Image table. Just one minor challenge, though…

I want to add the width and height of the image to the image table too, and based on the ratio between width and height I either have to add a new ratio record or use an existing one. This meant I had to read every file into memory to find its size and then check whether a matching ratio record already exists. If not, I would need to add the new ratio record and make sure the next request for ratio records includes it. Plus, I needed to check whether the image and folder records already exist in the database, because this tool should only add new images.
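
Roughly, that naive first version looked something like the sketch below, assuming the hypothetical model shown earlier. It reads every image just to get its dimensions and saves after every file:

```csharp
using System;
using System.Drawing;   // Image.FromFile, used only to read the dimensions
using System.IO;
using System.Linq;

static class NaiveImporter
{
    static readonly string[] Extensions = { ".png", ".jpg", ".bmp" };

    public static void ImportAll(string root)
    {
        using (var db = new CatalogContext())
        {
            foreach (var dir in Directory.EnumerateDirectories(root, "*", SearchOption.AllDirectories))
            {
                string relative = dir.Substring(root.Length).TrimStart('\\', '/');
                var folder = db.Folders.FirstOrDefault(f => f.RelativePath == relative);
                if (folder == null)
                {
                    folder = new ImageFolder { RelativePath = relative };
                    db.Folders.Add(folder);
                }

                var files = Directory.EnumerateFiles(dir)
                    .Where(f => Extensions.Contains(Path.GetExtension(f).ToLowerInvariant()));

                foreach (var file in files)
                {
                    string fileName = Path.GetFileName(file);
                    if (db.Images.Any(i => i.FileName == fileName && i.Folder.RelativePath == relative))
                        continue;                              // already catalogued

                    int width, height;
                    using (var bitmap = Image.FromFile(file))  // reads the whole file: slow
                    {
                        width = bitmap.Width;
                        height = bitmap.Height;
                    }

                    int gcd = Gcd(width, height);
                    int rw = width / gcd, rh = height / gcd;
                    var ratio = db.Ratios.FirstOrDefault(r => r.Width == rw && r.Height == rh);
                    if (ratio == null)
                    {
                        ratio = new ImageRatio { Width = rw, Height = rh };
                        db.Ratios.Add(ratio);
                    }

                    db.Images.Add(new ImageFile
                    {
                        FileName = fileName,
                        Name = Path.GetFileNameWithoutExtension(file),
                        Width = width,
                        Height = height,
                        Folder = folder,
                        Ratio = ratio
                    });
                    db.SaveChanges();                          // one round trip per image: slow
                }
            }
        }
    }

    static int Gcd(int a, int b) { return b == 0 ? a : Gcd(b, a % b); }
}
```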

The performance was horrible, as could easily be predicted. Especially since I make images and photos at high resolutions, so reading each of those files takes dozens of milliseconds. No matter that my six cores at 3.5 GHz and 32 GB of RAM turn my system into a speed demon, these read actions are just slow. And I did it inefficiently: I have six cores, but my code was single-threaded. So, redo from start, and this time do it multithreaded.

But multithreading and the Entity Framework don't go well together. The database connection isn't thread-safe, so you cannot access the database methods from multiple threads. Besides, the ratio table could generate collisions when two images with the same new ratio are processed: both threads would notice the ratio doesn't exist and both would add it, but one of them would then fail because the other added it first. So I needed to change my approach.

So I used ‘Parallel.ForEach’ to walk through the folder list and then again for all files within each folder. I would collect the data in internal lists, and when the file loop was done, I would loop through all images and add those that didn't exist yet. And yes, that improved performance a lot and kept the conflicts with the ratio table away. Too bad I was still reading all images, but that was not a big issue. Performance went from several hours to slightly over one hour. Still slow.
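
A hedged sketch of that approach: read the files on all cores into a thread-safe collection, then do the database work single-threaded afterwards. The FoundImage class and AddImageIfMissing helper are just illustrative names:

```csharp
using System.Collections.Concurrent;
using System.Drawing;
using System.IO;
using System.Linq;
using System.Threading.Tasks;

class FoundImage
{
    public string Folder, File;
    public int Width, Height;
}

static class ParallelScanner
{
    public static void ImportAll(string root)
    {
        var found = new ConcurrentBag<FoundImage>();

        // Read the files on all cores; only collect data here, no database access.
        Parallel.ForEach(Directory.EnumerateDirectories(root, "*", SearchOption.AllDirectories), dir =>
        {
            var files = Directory.EnumerateFiles(dir)
                .Where(f => new[] { ".png", ".jpg", ".bmp" }
                    .Contains(Path.GetExtension(f).ToLowerInvariant()));

            Parallel.ForEach(files, file =>
            {
                using (var bitmap = Image.FromFile(file))   // still reading every image
                {
                    found.Add(new FoundImage
                    {
                        Folder = dir, File = file,
                        Width = bitmap.Width, Height = bitmap.Height
                    });
                }
            });
        });

        // Database work stays single-threaded: the context isn't thread-safe, and this
        // also avoids two threads inserting the same new ratio record at the same time.
        using (var db = new CatalogContext())
        {
            foreach (var image in found)
                AddImageIfMissing(db, image);   // hypothetical helper, as in the earlier sketch
            db.SaveChanges();
        }
    }

    static void AddImageIfMissing(CatalogContext db, FoundImage image)
    {
        /* lookup/insert of folder, ratio and image, as in the naive version above */
    }
}
```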

So, one more addition: I would first read all existing folders and images from the database, and if a file already existed in this list, I wouldn't read its size anymore since it wasn't needed; I could skip the image entirely. As a result, the first full import still took an hour, but a second run would finish within a minute, since there wasn't anything left to read or add. The speed was now limited to just reading the files and folders from the database and from the disk.
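
In code, that optimization boils down to loading the known keys into a set up front and checking it before the expensive file read. A small sketch, again with invented names:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class KnownImages
{
    private readonly HashSet<string> keys;

    public KnownImages(CatalogContext db)
    {
        // One query, fully materialized in memory; after this the context can be closed.
        keys = new HashSet<string>(
            db.Images.Select(i => i.Folder.RelativePath + "|" + i.FileName),
            StringComparer.OrdinalIgnoreCase);
    }

    // Called inside the file loop, before the expensive Image.FromFile call.
    public bool Contains(string relativeFolder, string fileName)
    {
        return keys.Contains(relativeFolder + "|" + fileName);
    }
}
```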

When you're doing these kinds of projects in an Agile team and you're scrumming around, things will slow down considerably if you haven't thought about these challenges before you start the sprint. Since the first version looks quite simple, you might have planned it as a very short task and thus end up with extremely slow code. In the next sprint you would have to consider options to speed things up, and you will realize that making it multithreaded is a bigger task. And while you are working on the multithreaded version, you might discover the conflicts with the Entity Framework plus the possible collisions within the tables. So the second sprint might end with a buggy but faster solution with lots of exception handling to catch all possible problems. The third sprint would then fix these, if you manage to find a better solution. Otherwise, this problem might haunt you until the deadline of the project…

And this is where teams have to be really careful. The task sounds very simple, but it's not. These things are easily underestimated by a team and should be well planned before you start writing code. Experienced developers will spot these problems before they start, and thus know that they should take their time and plan carefully instead of writing code immediately. (I only did it so I could write this post.) The task seems extremely simple, and I managed to describe it in the second paragraph of this post in just three lines. But a high-performance solution requires me to think before I start writing code.

My last approach is the most promising, though. It can be done with multithreading, but it's far more complex than you'd assume at first. And it will be memory-hungry, because you need to build several lists in memory.

You would have to start with two threads. One thread reads the database and generates lists of files, folders and ratios. These lists must be completely in memory: if you keep them as queryable lists, the system would keep going back to the database to read them. Besides, once you're done generating these lists, you will want to close the database connection. This all tells you what you already have. The second thread reads all folders and, using parallel threads, reads all image files within those folders. But you don't read the image sizes yet, nor calculate any ratios.

When you're done collecting the data, you will have to compare it all. You start by comparing the lists of folders. Folders that exist in both lists can be ignored (but not their files). Folders that exist in the database list but not in the disk list should be deleted, including all files within those folders! Folders that are on disk but not in the database need to be added. Now you can start two threads, each with its own database connection: one deletes all folders (plus their related images) that no longer exist on disk, while the other adds all new folders that were found on disk. By using two database connections, you can speed things up. You will have to wait for both threads to finish, though. But it shouldn't be slow.
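
A sketch of that folder comparison, assuming both lists were loaded completely beforehand; each task gets its own context and thus its own connection:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

static class FolderSync
{
    public static void Sync(List<string> foldersInDb, List<string> foldersOnDisk)
    {
        // Plain in-memory set operations on the two fully loaded lists.
        var toDelete = foldersInDb.Except(foldersOnDisk, StringComparer.OrdinalIgnoreCase).ToList();
        var toAdd    = foldersOnDisk.Except(foldersInDb, StringComparer.OrdinalIgnoreCase).ToList();

        var deleteTask = Task.Run(() =>
        {
            using (var db = new CatalogContext())
            {
                foreach (var path in toDelete)
                {
                    var folder = db.Folders.First(f => f.RelativePath == path);
                    db.Images.RemoveRange(db.Images.Where(i => i.FolderId == folder.Id));
                    db.Folders.Remove(folder);
                }
                db.SaveChanges();
            }
        });

        var addTask = Task.Run(() =>
        {
            using (var db = new CatalogContext())
            {
                foreach (var path in toAdd)
                    db.Folders.Add(new ImageFolder { RelativePath = path });
                db.SaveChanges();
            }
        });

        Task.WaitAll(deleteTask, addTask);   // both must finish before the images are compared
    }
}
```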

The next step is comparing the images. Here you do something similar to what you did with the folders: you split the lists into three different lists. One with all images that are unchanged, one with all images that need to be deleted and one with all images that need to be added. You create a separate thread with its own database connection to delete the images, so your main process can start working on the ratios table.

Because we now know which images need to be added, we can go through those files using parallel processing, read the image width and height and add this information to the image records. Once we have enriched this list with those sizes, we can use a LINQ query to generate a list of all ratios of those images and remove all duplicate ratios from it. This gives us the list of ratios that we need to check.
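
That LINQ query is short; a sketch, using Tuple pairs (which compare by value, so Distinct works) and a small helper to reduce e.g. 1920×1080 to 16:9:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class RatioHelper
{
    // Reduce a width/height pair by their greatest common divisor.
    public static Tuple<int, int> Reduce(int width, int height)
    {
        int a = width, b = height;
        while (b != 0) { int t = a % b; a = b; b = t; }
        return Tuple.Create(width / a, height / a);
    }

    // Distinct ratios among the images that still have to be added.
    public static List<Tuple<int, int>> NewRatios(IEnumerable<ImageFile> newImages)
    {
        return newImages
            .Select(i => Reduce(i.Width, i.Height))
            .Distinct()
            .ToList();
    }
}
```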

Before we add the new images, we will have to check the ratios table. As with the folders table, we check for all differences. However, we cannot delete ratios that we haven’t found among the images, because we skipped the images that already exist. We will do this later, though. We will first start adding the new ratios to the database. This too can be done in a separate thread but it’s pretty fast anyways so why bother? A performance gain of two seconds isn’t worth the extra effort if a process takes minutes to finish. So add the new ratios.

Once all ratios are added, we can add all images. We could do this using parallel threads, with each thread creating a new database connection and processing all images from one specific folder or with one specific ratio. But if you want to add them multithreaded, I would simply recommend dividing the images into groups of similar size. Keep the number of groups proportional to the number of cores (e.g. 24 groups for my six cores) and let the system do its work. By dividing the images evenly over multiple threads, each thread should take about the same amount of time.
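
A sketch of that grouping idea: a round-robin split into roughly equal groups, each inserted through its own context. It assumes the FolderId and RatioId values were already resolved in the earlier steps:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

static class GroupedInserter
{
    public static void AddInGroups(List<ImageFile> newImages, int groupCount)
    {
        // Round-robin split into roughly equal groups (e.g. 24 groups on a six-core box).
        var groups = newImages
            .Select((image, index) => new { image, index })
            .GroupBy(x => x.index % groupCount, x => x.image);

        Parallel.ForEach(groups, group =>
        {
            using (var db = new CatalogContext())   // one context/connection per group
            {
                foreach (var image in group)
                    db.Images.Add(image);           // FolderId and RatioId assumed resolved
                db.SaveChanges();
            }
        });
    }
}
```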

When adding the new images, you have to find the related folder and ratio in the database again. This makes adding images slower than adding folders or ratios, because you need the extra lookup. This performance would improve if we had kept the folder and ratio lists as queryable lists, but then we could not open and close the connections, nor could we use multiple connections to add those images. And we want multiple connections to speed things up. So we accept a slightly worse performance at this point, although we could probably speed it up a bit by using a stored procedure to add the images. The stored procedure would have parameters for the image name, the image filename, the width and height, the folder name and the ratio width and height. I'm not too fond of procedures with many parameters and I haven't tested whether this would increase performance, but in theory it should be faster, especially if the database is on a different machine than the application.
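
Calling such a stored procedure would look roughly like this; "dbo.AddImage" and its parameters are hypothetical, and the procedure is assumed to do the folder and ratio lookups on the server in a single round trip:

```csharp
using System.Data;
using System.Data.SqlClient;

static class ImageProcCaller
{
    // conn is an already opened SqlConnection.
    public static void AddImage(SqlConnection conn, ImageFile image, string folderPath, int ratioW, int ratioH)
    {
        using (var cmd = new SqlCommand("dbo.AddImage", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@Name", image.Name);
            cmd.Parameters.AddWithValue("@FileName", image.FileName);
            cmd.Parameters.AddWithValue("@Width", image.Width);
            cmd.Parameters.AddWithValue("@Height", image.Height);
            cmd.Parameters.AddWithValue("@FolderPath", folderPath);
            cmd.Parameters.AddWithValue("@RatioWidth", ratioW);
            cmd.Parameters.AddWithValue("@RatioHeight", ratioH);
            cmd.ExecuteNonQuery();
        }
    }
}
```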

And thus a simple task of adding images to a database turns out to be complex, simply because we need better performance. It would still take hours if there are a lot of new images to add, but once the catalog is mostly filled, it will do quite well.

But you will have to ask yourself and your team whether you are capable of detecting these problems before you start a new sprint. Designs look simple, because designers don't always keep performance in mind. These things are easily asked for because they appear very simple, but they have a lot of consequences. Similar problems arise when you work on projects that need to be secure. The design might ask for a login screen with username and password, and optionally a few OpenID providers as alternative logins, but the amount of code needed to manage all this data and keep it secure is quite complex. These are the moments when you need to write some technical documentation first, which is something people often forget when working on an Agile project.

Still, you cannot blame the developer if the designer writes just a few lines and the developer chooses the first, slow solution. The result would still be the requested task. It is the designer who needs to be aware of these possible performance pitfalls. And with Agile, you have a team: all team members should be able to point out that this simple description hides these pitfalls, making it a long and complex task. They should all realise that they have to discuss possible solutions, and preferably they do so as a team with just one computer. (The computer would be used to find information, not to write code!) Only when they agree on the proper solution should one or two of them start writing code. And they would know how long this task will take. Thus, the task would finish within two sprints: in the first sprint, all team members have a small task to meet and discuss the options; in the second sprint, one or more members have the big task of implementing the code.

Or, to keep it simple: think before you start writing code!

Is XML in decline?

I happen to be one of those older software developers who saw the rise of XML. I even remember the older SGML standard, although I never used SGML. Version 1.0 of XML became an official standard in 1998. Once it became a standard, many companies started working on the killer app that would let you work with XML without much hassle. And although many companies started by creating their own XML parsers, not all of them conformed completely to the standard. Those parsers disappeared fast enough too.

Right now, version 1.1 of XML is the latest standard. Yes, in 16 years not much has happened to this standard, and the changes that have been applied are mostly about supporting EBCDIC platforms and newer Unicode definitions. There are discussions about a version 2.0, but it's not likely to become a standard soon. Strange as it might sound, XML seems to be in decline if you look at how it's used.

The power of XML was, of course, in the way you defined these files and in the transformations you could do on them. While we used DTD files at first to define the structure of an XML file, some smart people came up with the XSD schema format, which allows more flexibility and is itself an XML file. Combined with some nice graphical tools, XSD made it easier to define an XML file and to validate whether an XML file conforms to the proper structure. I made plenty of XSD files between 2000 and 2010, since my work involved a lot of XML data exchanges.
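
In .NET, validating a document against an XSD is a few lines of work with the standard XmlReaderSettings support; a small sketch, assuming local file paths:

```csharp
using System;
using System.Xml;
using System.Xml.Schema;

static class XmlValidation
{
    public static bool Validate(string xmlPath, string xsdPath)
    {
        var settings = new XmlReaderSettings { ValidationType = ValidationType.Schema };
        settings.Schemas.Add(null, xsdPath);      // null: use the schema's own target namespace

        bool valid = true;
        settings.ValidationEventHandler += (sender, e) =>
        {
            valid = false;
            Console.WriteLine("{0}: {1}", e.Severity, e.Message);
        };

        using (var reader = XmlReader.Create(xmlPath, settings))
        {
            while (reader.Read()) { }             // reading the document triggers validation
        }
        return valid;
    }
}
```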

Of course, transformations are also important, and for those we use stylesheets. An XSLT file is written in XML itself and defines how to convert an XML file to some other output format. In general, this output would be another XML file, an HTML document to display in a web browser, a simple text file or even a comma-separated file. In some special cases it could even create a complete rich-text document that you could open in Word. This meant that you could, for example, send an XML file to a server, which would then process it: validate the file against a schema, do additional validation with a stylesheet, and if it passed those checks, use other stylesheets to extract data from the XML and send it to other servers for further processing, while yet another stylesheet generated a document to return to the user. You could do a lot of processing with just XML files.
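
Applying such a stylesheet from .NET uses the standard XslCompiledTransform class; a minimal sketch, with the file names made up for the example:

```csharp
using System.Xml.Xsl;

static class Transformer
{
    public static void Transform(string xmlPath, string xsltPath, string outputPath)
    {
        var xslt = new XslCompiledTransform();
        xslt.Load(xsltPath);                   // the stylesheet is itself an XML document
        xslt.Transform(xmlPath, outputPath);   // output can be XML, HTML, plain text, CSV...
    }
}
```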

Of course, XML also became popular because more developers started to create web services. They used the SOAP protocol for this, a slightly complex protocol that is heavily dependent on XML standards. Since SOAP also had a built-in versioning mechanism, you could always check whether the client was still using the right SOAP definitions. You could even use several SOAP message formats on the same system with only the version number as the difference. It wasn't easy to set up, but it worked extremely well.
And more has been developed to support XML even further. XPath expressions allow you to point to specific elements within an XML document. With XQuery, you can execute queries on XML files and process the results. With namespaces you can even combine multiple XML definitions that use similar entities. And then there are things like XLink, XPointer and XForms, which have never been very popular.

Between 2000 and 2010, it seemed that XML would become a dominant development technique. No more writing code in other programming languages that needed to be compiled, simply because XML could serve as a fast scripting environment. Many platforms started to offer standard objects for processing XML files, and knowledge of XML became a hard requirement for developers. So, what changed?

Well, many developers consider the XML format a bit bulky, especially because tags are written twice: once to open the element and once to close it. Thus, if an element is called ‘NumberOfElements‘ then you have to write <NumberOfElements>10</NumberOfElements>, and that's a lot of text to store the number 10. As a result, some developers shorten those tag names so the resulting XML becomes smaller. If you have 10,000 of these tags in your XML file, shortening the name to NOE would save 26 characters per element, thus 260,000 characters in total. This doesn't seem much, but developers feel they gain a lot by these kinds of optimizations. With modern multi-core processors and systems with 8 or more GB of RAM, such optimizations might make the code half a second faster, which you barely notice with web services, but still… developers think it saves a lot. And yes, when resources are truly limited it makes a lot of sense, but the modern mentality is that companies just add a second server if one is too slow. Or more, if need be. This is because the cost of extra hardware is lower than the cost of having developers optimize the code even further.

These kinds of optimizations make XML files less human-readable while the purpose was to make this kind of data more readable. It becomes slightly worse when the XML file uses namespaces, since those namespaces are also shortened to just a few letters.

Another problem is the need to parse XML to extract the data. More and more companies are creating web applications that run within web browsers and rely heavily on JavaScript. These apps need to run on multiple devices too. Unfortunately, not all browsers support parsing XML files, and even those that do are a bit complex to use. With regular expressions it's still possible to extract some data from the XML, but if you need to fill a grid with 50 rows and 20 columns, things become really complex. To solve this, developers started to send data to web applications as JavaScript instead of XML. This could then be executed, and thus the data would load itself into memory. Since JavaScript object literals are less bulky than the begin/end tags of XML elements, this new format was very practical, and thus JSON was born.

The birth of JSON also demanded a change in web services. Since web applications would call these services directly, it would be very clumsy if they had to set up SOAP messages and then parse the SOAP results. A newer, simpler style of web services arose: REST. Of course, there are many other web service styles, but REST seems to have become the new standard, especially because it's simpler and builds directly on the HTTP(S) protocol.

Of course, web applications have become more important these days because we’re getting more and more devices with all kinds of different operating systems, which all have web browsers. And, as I said, not all of those devices have a native XML parser built-in. They do support JavaScript though, and as a result it becomes quite easy to develop web applications for all devices which use data in JSON formats.

Of course, many devices also allow platform-specific apps that can be created with the development tools for those platforms. For OS X and iOS-based devices you would use Objective-C, while you would use C++ or Java for Android devices. (Java is the preferred development platform for Android.) For Windows RT you would use .NET for Metro-style applications, with either VB or C# as the primary language. This makes it a bit difficult to develop software that runs on all three kinds of devices, but several parties have created compilers that produce platform-dependent executables from platform-independent code. Unfortunately, working with XML parsers still differs on all these platforms, and those third-party compilers need to wrap their parsers around the built-in parsers of the underlying platform. That makes them a bit slow.

Since the number of operating systems has risen as the market keeps getting more and more new devices, it becomes harder to keep a single standard that's supported by all those systems. And the XML standard is quite complex, so different parsers might not all support the same things. In that regard, JSON is much simpler, since it's essentially just nested assignment statements. Its syntax comes from JavaScript, which also happens to be similar to the C++, C#, Java and Objective-C syntax. The main difference with those languages is that JSON requires the field names to be between quotes too.

So, XML is becoming less attractive because it requires too much work to use. JSON makes data serialization simpler and is less bulky. Especially now that developers are focusing more on web applications and on apps for specific devices, the use of XML is declining in favor of JSON and other solutions. But there's one more reason why XML is in decline, and that is something within the .NET framework called LINQ.

LINQ was introduced with .NET version 3.5 and has become popular since then. Basically, LINQ allows you to hold data in structured objects and use simple queries to extract data from those objects, or to execute transformations on them. This is similar to XPath and XSLT, but now it's part of your development language, giving you more choice in the functions you can apply to the data. This is especially important for date fields, since XML doesn't work well with date formats. LINQ makes extracting data from object trees quite easy, and it can be used on an XML document once you've read that document into memory in a proper XDocument or XmlDocument object. Thus, the need for XSLT to transform data has disappeared, since you can do the same in C#, VB, F# or Oxygene.
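
A small sketch of what such a LINQ-to-XML query could look like; the images.xml file and its element and attribute names are invented for the example:

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

static class CatalogQueries
{
    // Assumes a hypothetical images.xml with <image name="..." width="..." height="..."/> elements.
    public static void ListLandscapeImages(string path)
    {
        var doc = XDocument.Load(path);

        var landscape =
            from img in doc.Descendants("image")
            let width = (int)img.Attribute("width")
            let height = (int)img.Attribute("height")
            where width > height                          // landscape images only
            orderby (string)img.Attribute("name")
            select new { Name = (string)img.Attribute("name"), Width = width, Height = height };

        foreach (var img in landscape)
            Console.WriteLine("{0}: {1}x{2}", img.Name, img.Width, img.Height);
    }
}
```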

The result is that .NET developers don't really have to learn XML anymore; their .NET knowledge combined with LINQ is more than enough. Since .NET also supports serialization to and from XML, it's quite easy to read and write XML files in .NET. You can import an existing XSD file into your .NET application and have it converted to code, but since most XML data starts out as objects that need to be serialized to XML anyway, you will often see that developers just define the objects, add attributes that tell whether the object and its fields become elements or attributes, and let the serialization library use these object definitions to convert to and from XML. Thus, knowledge of XML schemas is no longer a requirement.
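
A hedged sketch of that attribute-based approach, with made-up class and element names, using the standard XmlSerializer:

```csharp
using System.IO;
using System.Xml.Serialization;

[XmlRoot("image")]
public class ImageInfo
{
    [XmlAttribute("name")] public string Name { get; set; }
    [XmlElement("width")]  public int Width { get; set; }
    [XmlElement("height")] public int Height { get; set; }
}

static class ImageInfoStorage
{
    public static void SaveAndLoad()
    {
        var serializer = new XmlSerializer(typeof(ImageInfo));

        // Writing XML without ever touching a schema by hand...
        using (var writer = new StreamWriter("image.xml"))
            serializer.Serialize(writer, new ImageInfo { Name = "sunset", Width = 1920, Height = 1080 });

        // ...and reading it back into an object again.
        using (var reader = new StreamReader("image.xml"))
        {
            var loaded = (ImageInfo)serializer.Deserialize(reader);
        }
    }
}
```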

Because .NET development made the dependency on XML knowledge almost obsolete, the popularity of XML is in decline. It's still used quite often, but the knowledge needed to do practical things with XML tools is disappearing. Similar things are happening on other platforms: Java and PHP also gained support for LINQ-like queries. As a result, those environments can work on structured objects instead of raw XML data. Thus, XML is only needed when the data has to be sent to some other process, and even then, other formats might be chosen.

In fact, many developers care less and less about the data format used for inter-process communication. The system handles this for them, and they just use a serialization library that does the bulk of the work. XML isn't really declining, but fewer developers need knowledge of the XML format, since development tools provide nice wrappers that allow developers to use XML without even realizing they're using it. It's not XML that's in decline; it's the knowledge about XML that is in decline…

Motivating developers…

One of the biggest problems for software developers is finding the proper motivation to sit behind a screen for 8 hours per day, designing and developing new code and new projects. It's generally boring work that requires a lot of mental effort. And the reward tends to be just more of the same work the next day, and the day after. Creating new code or fixing existing code is like working on an assembly line in a factory, placing a lid on a pot which someone else will close, over and over and over.

But developing code is a mental job, unlike putting lids on pots. During a physical job, your mind can wander to what you're going to do in the weekend, what's on television or whatever else is on your mind. A mental job makes that very difficult, since you can't think about your last holiday while also thinking about how to solve this bug. Thus developers have a much more demanding job than those on the assembly line, a job that causes a lot of mental fatigue. (And sitting so long behind a screen is a physical challenge too.)

Three things will generally motivate people. Three basic things, actually, that humans have in common with most animals. We all like a good night of sleep, we all like to eat good food and we’re all more or less interested in sex. Three things that will apply for almost anyone. Three things that an employer might help with.

First of all, sleep. Developers can be very busy with their trade both at home and at work. Many of them have a personal interest in their own field and can spend many hours at home learning, playing or even doing some personal work on their own computers. So a developer might start at 8:30 and work until 17:00. The trip home, dinner and the meet-and-greet with the family take some time, but around 19:30 the developer is back online on Facebook and other social media, playing some online games or studying new things. This might go on until well past midnight before they go to bed. Some six hours of sleep later, they get up again, have breakfast, read the morning paper and go back to work.

But a mentally challenging job requires more than six hours of sleep per day. So you might want to tell your employees to take good care of themselves if you notice they're up past midnight. You need them well rested, or else they're less productive. Even though those developers might do a great job, they could do even better if they got their eight hours of sleep every day. And as an employer you can help by allowing employees to visit social sites during work hours, since it helps them relax and lowers the need to check those sites while they're at home. The distraction of e.g. Facebook might actually improve their mental skills, because it relaxes the mind.

The second motivation is food. Employers should consider providing free lunches to their employees, preferably sharing meals together in a meeting room or even a proper dining room. Have someone do groceries at the local supermarket to get bread, spreads, cheese, butter, milk, sodas and other drinks and snacks. While it might seem a waste of money, the shared meal will increase morale, allow employees to have all kinds of discussions with one another and improve team building. It also makes sure everyone has lunch at the same moment, so they will all be back at work at the same time again.

Developers tend to have lunch somewhere between 11:30 and 14:00, and if they have to get their own lunch, it's not unlikely for them to walk to the local supermarket themselves or to bring lunch from home. When they go shopping for lunch, they're unavailable during that time. Of course, lunch time is their own time, but if you need them you don't want to wait until they're back from the supermarket. Another problem is that those employees will start storing food at work, in their desks or wherever else they can store it. This could attract mice, and I don't mean computer mice but the live, walking and eating animals.

If an employer provides the lunch and other snacks, there is also a central storage for food products. That storage is easier to keep in order than the desks of developers. Besides, those developers now know their food requirements are covered during work hours, so they feel more comfortable.

The third motivation is sex. And here employers have to be extra careful, because this is a very sensitive subject. For example, a developer might spend some time on dating websites or even porn sites. As with social websites, a small distraction often helps during mental work, but where a social website takes two minutes to read a post and respond, a dating website takes far more time to process the profiles of possible dating partners. A porn site is also distracting for too long and might put the developer in the wrong mood.

The situation at home might also be problematic. An employee might be dealing with a divorce, which will impact their sex life. It also puts them back into the world of dating and thus interferes with their nightlife a bit more. This is a time when they will be less productive, simply because they have too much of their personal lives on their minds. And not much can be done to help them, because they need to find a way to stabilize their personal lives again. Do consider sending the employee to a proper counselor for help, though.

Developers who are single might be a good option, though. They are already used to life as a single and thus will be less distracted by their dates. Still, if they're young, their single status might change, and when that happens it can have an impact on their jobs. But the impact might even be an improvement, because their partner might actually force them to go to bed sooner, thus fulfilling the sleep motivation.

Married developers who also have children might be the best option since their family lives will require them to live a very regular life. The care for their children will force this regularity. But the well-being of those children might cause the occasional distractions too. For example, when a child gets sick, the developer needs someone to care for the child at home. And they might want to work at home a few days a week to take care of their children.

As an employer, you can’t deal with the sex lives of your employees at work. Those things are private. However, it can be helpful for employees if they can spend more time at home, in a private area, if they have certain needs in this regard. Allowing them to work at home would give them some more options. Since they don’t need to travel to work, they have more time available. If they decide to visit a dating site for half an hour, they could just work half an hour longer and no one would even know about it. If their child is sick, they can take care of them and still work too.

In conclusion, make sure your employees sleep well, give them free lunches and other snacks at the workplace and allow them to work at home for their personal needs. This all will help to make them more productive and allow them to improve themselves.

To Agile/Scrum or not?

The Internet is full of buzzwords that are used to make things sound more colorful than they are. Today's buzzword seems to be "cloud solutions", and it sounded so new a few years ago that many people applied this term to whatever they were doing, simply to be part of the new revolution, not realizing that the cloud is nothing more than a subset of websites and web services. And web services are a subset of the thin client/server technologies of over a decade ago. (Cross-breeding client/server with the web will do that.) It's just how things evolve, and once in a while a new buzzword needs to be created; marketeers are already working on the next buzzword that should make the cloud sound obsolete. Simply because new products need to be sold.

Still, the software development world hasn't been quiet either. In the past, a project would be completed through a series of steps. It would start with an idea that was turned into a concept, and this concept would include all requirements for the project. Designers would then come up with some basic principles and additional planning. When they were done, implementation started, which included writing all the code and integrating the project into existing products. It would then be tested, and once the tests were satisfying, the whole project could be deployed and maintenance would start.

If the project ran into problems in one of these steps, they would often have to go back one step (or more, on rare occasions). This principle is called the "waterfall model", and its drawback is that every step can take weeks to finish. It generally means you can only release updates twice per year. Not very popular these days.

So, new ideas were needed to make it possible to create updates more often. It started with the Agile Manifesto in 2001 and it has become a very popular method these days. Most groups of developers will have heard about it and have started implementing its principles. Well, more or less…

Agile has just four basic rules to keep in mind:

Individuals and interactions over processes and tools.
Working software over comprehensive documentation.
Customer collaboration over contract negotiation.
Responding to change over following a plan.

That's basically the whole idea. And it sounds so simple because it makes clear what is important in the whole process. Agile focuses a lot on teamwork and tries to keep every team member involved. Make sure every member is comfortable with the whole process and, basically, talk a lot with one another. People tend to forget it, but communication is a key element between people.

Of course, whatever you publish should work, and work well enough so users don’t complain about crashing applications or lost data. You might be missing features that customers would like, but that should not be the main focus of the whole process. Keep it working and keep the customer happy.

Of course, since you’re dealing with customers, you will need to know what they actually want. It’s fine if the CEO decided that the project needs methods X and Y to be implemented but if all customers tell you they want methods A or B implemented, then either the CEO has to change his mind or the company should start looking for a new CEO.

And keep in mind that things change, sometimes really fast. It's hard to predict what next year will bring us, even online. Development systems get new updates, new plug-ins and new possibilities, and you need to keep up to get the most out of the tools available.

So, where do things go wrong?

Well, companies tend to violate these principles quite easily. I've seen enough projects fail because of this, causing major damage or even bankrupting companies, simply because the company failed at Agile. Failure can be devastating with Agile, since you're developing at high speed. And we all know: the faster you go, the harder you can fall…

Most problems with Agile start with management. Especially the older managers tend to live in the past or don't understand the whole process. Many Scrum sprints are disrupted because management needs one or more developers from that sprint for some other task. I've seen sprints disrupted because a main programmer was also responsible for maintaining a couple of web servers, and during the sprint one of those servers broke down. Since fixing it had priority, his tasks for that sprint could not be finished in time, and unfortunately, other tasks depended on this task being ready.

Of course, the solution would be that another team member took over this task, but it did not fit the process that the company had set up. This task was for a major component that was under control by just one developer. Thus, he could not be replaced because it disturbed the process. (Because another developer might have slightly different ideas about doing some implementations.)

Fortunately, this only meant a delay of a few weeks and we had plenty of time before we needed to publish the new product. We’d just have to hurry a bit more…

Agile also tends to fail when teams don’t work well together. Another company had several teams all working on the same project. And unfortunately, the project wasn’t nicely divided in pieces so each team had its own part. No, all teams worked on all the code, all the pieces. And this, of course, spells trouble.

When you have multiple teams working on the same code, you will often need an extra step of merging code. This is not a problem if one team worked on part A and the other on part B. It does become a problem when both teams worked on part C and wrote code that overlaps. Things go fine when you test just the code of one team, but after the merge you need to test it all over again, so the whole process gets delayed by one more sprint just to test the merged code. And it still leaves a lot of chances for bugs to slip through testing. Especially with manual testing, when the tester has tested process X a dozen times already for both teams and now has to test it again for the merged code: they might decide to just skip it, since they've seen it work dozens of times before, so what could go wrong?

As it turned out, each team would do its own merging of the code with the main branch. Then they would build the main branch and tell the testers. So while the testers are busy testing the main branch that team 1 provided, team 2 is also merging and will tell them again a few days later. The result is that all tests have to be done again, so days of testing are wasted. Team 3 follows after this, again wasting days of testing. Team 1 then decides to include a small bugfix and, once more, testing has to start from the beginning, all over again.

With automated testing, this is not a problem. You would have thousands of tests that should pass, and after each update to the main branch those tests would run from beginning to end. Computers don't complain. However, some tests are done manually, and the people who execute those tests will be really annoyed if they have to do the same test over and over with every new build. It would be better if they just tried to automate their manual tests, but that doesn't always happen. So, occasionally they decide that they've tested part X often enough and it never failed, so why should it fail the next time?

Well, because team 1 and team 2 wrote code that conflicts with one another and that code is in part X. The testers skip it, thus the customer will notice the bug. Painful!…

There are, of course, more problems. I've seen a small company that had a nice, exclusive contract with a very big company. Let's call them company Small and company Big. Company Small had created a product that company Big really liked, so they asked for an exclusive version of it, with features that company Big would choose. And this contract would be worth tens of millions to company Small and its ten employees.

And things would have gone fine if company Small had just focused on delivering what company Big wanted, and on delivering in time, instead of deciding to keep working on its own products. But no, other things were more important, and the customer would just get what company Small made, with some minor adjustments. And the CEO was quite happy with this progress. That is, until the customer noticed that their wishes were not being heard. Apparently all company Big was supposed to do was sign the contract and pay the bill, and once things were done, they would just have to accept what was given to them. So company Big found another company willing to do the same project and just dumped company Small. End of contract and thus end of income, since company Small worked exclusively for the bigger company. And within five months, company Small went tits-up, bankrupt. Why? Because they did not listen to the customer; they did not keep the customer happy.

And another problem is the fact that companies respond very slowly to changes. I've worked for companies that used development tools that were five years old, simply because they did not want to upgrade. I still see the occasional job offering where companies ask for developers skilled with Visual Studio 2008, while there are three newer versions available already (2010, 2012 and 2013). In 2003 I was still working on a 16-bit project that was meant to run on Windows 3.1 and up, simply because one single user still used an old Windows 3.11 system. At least, we thought they did, because no one ever asked them if they had upgraded. And that customer never told us that they had indeed upgraded and didn't think of asking for a 32-bit version…

I’ve seen management hang on to a certain solution even though there’s plenty of evidence that newer options are available. I’ve developed software on 32-bit systems with 2 GB of memory when 64-bit systems were available and had up to 8 GB of memory, plus more speed. I had to use a single-monitor system on a PC that had options for multiple monitors plus we had extra monitors available, but management considered it a waste. The world is changing and many systems now easily support two or more monitors but some companies don’t want to follow.

So, what is Agile anyways? It’s a method to quickly respond to changes and desires of customers with a well-informed team that feels committed to the task and to deliver something the customer wants. (And customers want something they can use and which works…)

Would there be a reason not to use Agile? Actually, yes. It’s not a silver bullet or golden axe that you can use to solve anything. It’s a mindset that everyone in the team should follow. One single member in the team can disrupt the whole process. One manager who is still used to “the old ways” can devastate whole sprints. When Agile fails, it can fail quite hard. And if you lack the reserves, failure at Agile can break your company.

Agile also works better for larger projects, with reasonable big teams. A small project with one team of three members is actually too small to fully implement the Agile way of working, although it can use some parts of it. Such a small team tends to make planning a bit more difficult, especially if team members aren’t always available for the daily scrum meetings. When you’re that small, it’s just better to meet when everyone is available and discuss the next steps. No clear deadlines, since the planning is too complex. What matters is that goals are set and an estimation is made when it is finished. Whenever the team meets, they can then decide if the estimation is still correct or if it needs to be adjusted.

Another problem can be the specialists that are part of the team. Say, for example, that you have a PHP project that needs to communicate with a mainframe and some code written in COBOL. The team might have hundreds of PHP developers, but chances are that none of them knows anything about COBOL. So you need a COBOL specialist, and basically he alone carries the tasks of maintaining the mainframe side of the project. You can make him part of the Scrum meetings, but since he has to do his part all by himself, he doesn't have much use for the other team members. So again, just decide on a specific goal and estimate when it should be finished. Get regular updates to allow adjustments and let the COBOL developer do his work.

The specialist can become even more troublesome if you have to interact with a project that another company is creating. If you do things correctly, you and the other company would discuss a generic interface for the interaction between both projects. You would then both build a stub for the other company to use for testing. This stub just has to offer some dummy information, but it should be usable.

When both companies have the stubs they need, they can each work on their own part. They will have to keep each other informed if parts of the interface need to change or if rules about the data change. Preferably, this is done by providing a new stub. Both teams have just one goal: providing all the methods that are part of the stubs. And when parts are fully implemented, they can provide the other company with new stubs that already contain some working parts.

Still, when two companies have to work together this way, they have to think small. Don’t create a stub with thousands of methods for all the things you want to add during the next 5 years. Start small. Just add things to the stub that you want to finish for the next sprint. Repeat adding things per sprint and communicate with the other company about what they’re going to add next. You don’t have to work on the same method of the stubs anyways. One company might start working on the GUI part that allows users to enter name, address and phone number while the other works on storing employment data and import/export management. The stubs should just give dummy methods for those parts that aren’t implemented yet. Each company should develop the parts that they consider the most important, although both should be aware that everything is finished only if all stub methods are implemented.
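
To make it concrete, here is a minimal sketch of what such a stub could look like; the interface, method names and dummy data are invented for illustration, not taken from any real project:

```csharp
using System;

// The interface both companies agree on; methods are added sprint by sprint.
public interface IEmployeeService
{
    string GetEmployeeName(int employeeId);
    void StoreEmployment(int employeeId, string employer, DateTime startDate);
}

// The stub returns dummy but usable data, so the other team can develop and
// test against it long before the real implementation exists.
public class EmployeeServiceStub : IEmployeeService
{
    public string GetEmployeeName(int employeeId)
    {
        return "Test Employee " + employeeId;
    }

    public void StoreEmployment(int employeeId, string employer, DateTime startDate)
    {
        // Not implemented yet: accepted silently so the caller's GUI work can continue.
    }
}
```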

Agile is just a mindset. If used properly, it can be very powerful. However, do keep in mind that not all of Agile might be practical for your own situation. Agile requires a lot of time for meetings with developers, with customers and with management. Everyone needs to be involved and everyone needs to be available for those meetings. Scrum becomes more difficult if not all team members are available on all five workdays of the week. And worst of all, team members have to prepare for the meetings. Even for the daily meetings, since they have to keep track of their own progress.

Do not fear to just implement part of the whole Agile/Scrum principle. It is made to hybridise with other methods. Use the methods, don’t let the method force itself upon you.