Adventures in 3D.

I remember when Blender first became available to me. It was a 3D rendering engine and it looked fun, so I downloaded it, installed it and tried it. This was somewhere around 1999 and I still had a lot to learn back then. Still, I did not like the user interface of Blender (and still don't) and I considered it too complex and not useful enough for me, so I soon forgot about it again. I was still interested in rendering 3D images, but I also wanted something simpler.

So, around 2004 I purchased a copy of Poser and it had the user-friendliness that I was looking for. I did need to collect all kinds of models, though. But by using those models I could create some interesting images and use my own CGI artwork instead of my own photographs in the software development that I like to do.

Being able to generate your own artwork for your applications is a better option than depending on stock material or purchasing/hiring others to make it for you. I don't want to violate the copyrights of others, but when you create websites, you need some graphical parts too, and I needed to be my own supplier of these images. Buttons were easy, since Paint Shop Pro and other 2D software had plenty of functionality to create them. But more complex things, like showing a person behind a computer, either required taking pictures or rendering a 3D model. Poser made the second option available to me.

When Second Life became hot, I also played a bit with that. It is a 3D environment where you can build 3D objects simply by combining several basic shapes, or prims (short for primitives). The game made me more comfortable in 3D environments and made me want even more.

And now it's 2014. I have a piece of land in Second Life where I can build all kinds of things. I use the Firestorm viewer, which allows me to export my own objects from Second Life for use in other 3D software, where I can continue to change them even further. Second Life also allows me to import back the objects I've exported and modified, and to import other objects from 3D software, although it does have a lot of problems with many of those models. Unfortunately, Second Life isn't very clear when it reports errors and doesn't seem to be able to simply fix some problems during import.

But in all this time, I've collected a nice set of 3D software, which I will mention now, including where you can find it and what I think about it. All of this software is used on Windows systems.

Blender

Blender is a very popular product, but I consider the user interface a bit complex. Too many buttons and options clutter the screen and make it difficult to understand. To make things worse, its user interface behaves differently from standard Windows user interfaces. Dialog boxes tend to appear anywhere, with plenty of different options instead of Yes/No or OK/Cancel. Information is spread all over the screen, so you have to look everywhere to find it. It's just not intuitive, which is probably because this is an open-source collaboration between many developers who each left their own marks on the application.

Personally, I think the Blender user-interface needs a complete rewrite…

POV-Ray

POV-Ray is another 3D render engine, and even older than Blender. POV-Ray uses scripts instead of a 3D graphical environment, so it's not easy to use if you want to generate a 3D model: you have to write every line of the scene as code. Fortunately, there are plenty of 3D modelling applications that you can use to generate POV-Ray scripts. One of them is:

AC3D

AC3D is a commercial product that makes 3D modelling quite easy. Not as easy as Poser or Second Life, but it has plenty of good features. Its user interface could use some cleanup, though. On my dual-monitor setup, some of the dialog boxes tend to pop up on the wrong monitor. But it's very practical and supports several 3D file formats. For all the others, you might want something that's able to convert between many different formats. Something like the Online 3D Model Converter, or an application like:

AccuTrans 3D

AccuTrans 3D supports a good number of 3D file formats, allowing you to convert your models between different applications. This software also lets you make some simple modifications to your models, and I've used it to convert my Poser models to a format that Second Life understands. During this conversion, I also merge the parts of my models that use the same texture, thus making the models simpler within Second Life. Of course, there's an alternative that's free:

MeshLab

MeshLab is open-source, but it has a clear user interface. It still has a few flaws, though. For example, it's a bit slow compared to AccuTrans 3D. It fails to import some of my models correctly. It also fails to generate an export file that Second Life can read correctly, so I still need AccuTrans 3D to create those. (And even then Second Life tends to have problems importing them.)

Still, MeshLab is useful and allows you to make several changes to your models. But if you want to put models in proper poses, you will need:

Poser and Poser Pro

Poser is my favorite tool for creating 3D models to use within my software. If I need a model of a person behind a computer, I can make it within 20 minutes with Poser. Just take a model of a person, add clothing models and a computer model, perhaps a desk and chair model, and start rendering. It is very easy to use and it can import models created by other applications, although those will be less flexible than regular Poser models.

Another application that can be used with Poser models is:

DAZ Studio

DAZ Studio is free, which makes it very popular. It uses the same models as Poser, and DAZ also sells those models! Thus, DAZ has become a major supplier of Poser models.

But maybe it’s because I’m too used to Poser already, but I don’t like the user interface of DAZ Studio. To make it worse, I’ve tried to open some of my Poser models with DAZ Studio, only to discover that DAZ Studio did not accept many of the changes I’ve made to the models. Body parts were reset to their default shapes and it just did not look right.

Still, if you use Poser or DAZ Studio to render some new images, you'll often want an interesting background too. Indoor settings aren't much of a problem, but outdoor images need a more complex environment. One solution would be:

Bryce Pro

Bryce can make some great environments, although it seems to be missing some functionality. It also looks very small on my screen at a resolution of 1920×1200. While the results look very good, the user interface is less practical than the alternative:

E-on Vue

I use Vue a lot to render models that I've created with Poser. The reason is that Vue generates very good environments while Poser creates fine models. I could use Poser to render those models, but the lack of a good environment makes them look a bit boring.

Still, one problem with Vue is that it cannot export my generated environments for use in other software. Although Vue does have an export option, many of its models are not allowed to be exported. Thus you can create a nice sea, with boats and an island, and try to export it, only to discover that you can export just one tiny rock from the whole scene. Vue is also quite expensive compared to Bryce.

There is far more 3D software available, for all kinds of purposes. DAZ, for example, also offers Hexagon and Carrara:

Hexagon

Hexagon is just another tool for creating 3D models. I like to use it and have created a few things with it, but it tends to crash a lot, which makes it unreliable for big projects. While it is very user-friendly, the instability is just annoying.

Carrara

Carrara is similar to DAZ Studio and Poser, since it's meant to put models in certain poses. But it combines this with landscape modelling, which makes it more useful. It has a simple interface, making it very practical to use. Less is more. Well, at least for user interfaces; users tend to get lost in very busy ones.

Carrara can use Poser models and more. It can import templates I’ve created based on Poser models, although it doesn’t always succeed at importing Poser scenes. It can export to a format that Second Life should be able to read, but this too has some incompatibilities. Second Life is just too picky.

Second Life

It’s easy to forget but Second Life itself is also very capable of building 3D images. And it seems to be very user-friendly at this too, since it does so in an interactive way with the user. You have an avatar that can walk or fly around the object, which helps you to create models at a nice scale. It supports several primary shapes that can be used to build more complex items. It also allows great control over textures on your objects.

However, to build objects in Second Life, you need some land where you are allowed to build. Building is limited to certain areas, unless you own some land yourself. You also have to pay small amounts to upload images to the Second Life environment, which makes it costly to use. So, there is an alternative:

OpenSimulator

OpenSimulator is an alternative to Second Life. It's open-source, thus free, but it can be used with the same viewers that are used for Second Life. It is a bit complex to set up your own simulations, and OpenSimulator itself lacks a useful graphical interface. For that, you need a special viewer:

Firestorm

Firestorm happens to be a great viewer for both Second Life and OpenSimulator. While Second Life has its own viewer, Firestorm has some more advanced features and can also be used for OpenSimulator. You can use it to build objects within Second Life or OpenSimulator and then export these for use in other 3D software. Thus you could use Second Life to make a building or fortress, export it, and use it in Poser with some models around it.

There are more viewers available for Second Life and OpenSimulator, but I would recommend using Firestorm.

VastPark

One more simulator. Unlike Second Life, VastPark seems to focus more on businesses that want to make more interactive presentations. And what better to use for that than a virtual environment?

But like OpenSimulator, you can't really use this without first generating the virtual environment. That takes time and some skill with 3D graphics. You need to create models and create textures for those models, otherwise it's just a lot of white on white…

VastPark could also be used to create complex animations by recording the actions within the virtual world. This would be useful for creating training material or documentation of special events, like car accidents or office fires.

LightWave

I haven’t used LightWave but it looks quite nice. However, I use the LightWave file format as export format for Poser. I then convert those with AccuTrans 3D to the Collada file format, which Second Life can import. The only problem is that Poser models are extremely detailed because they are used to generate highly detailed images. Second Life can’t really handle that much details and often fails to import these models. I can use AccuTrans 3D to split the Poser model in several parts and import those parts one by one, which seems to have a better effect. However, the models that you will import this way in Second Life eat away a lot of your land usage, thus you need a large piece of land. Or your own simulation!

FreeCAD

FreeCAD is just another modelling tool. It has some good examples, but it lacks some practical functions. However, missing functionality can be added through plug-ins. It is a good tool to combine with POV-Ray. It can do a lot, depending on the design mode that you've selected.

DeleD

DeleD is another modeller, one that is used more for game development. It is useful for simpler objects, not Poser models. It works a bit like Second Life, where you select cubes, spheres and other primitives to build more complex objects.

Speaking of game development, there are also libraries that can help developers create their own 3D software. For example:

Horde 3D

This is an open-source 3D rendering engine, to be used in games and 3D applications. It has been created for speed, so it's less practical if you want to generate highly detailed images. But in a game, you want animations, and you want them in real time, running smoothly.

Ogre 3D

Ogre 3D is another 3D rendering engine, written in C++ and with wrappers for use with Python, C# and Java. It too is great for games and other interactive environments. It also supports Linux, iOS, Android, WinRT and Mac OS X. Basically, it's a library built around the OpenGL specifications.

OpenGL

OpenGL isn’t really an application but today, it is part of almost every computer that has a graphics card. The Khronos Group is responsible for maintaining this standard, thus every graphics card can be used by the OpenGL protocol. (At least, if the manufacturer added the support for OpenGL.) Most 3D software relies on OpenGL to display its graphics, although there are plenty of games that use DirectX instead. However, DirectX is an API created by Microsoft to be used for Windows applications only. Thus, many developers are focusing more on OpenGL while Microsoft seems to try to push them back to DirectX.

Oculus VR

The greatest dream in 3D will be the Oculus Rift, a special piece of hardware that's supposed to give you a 3D virtual environment. Basically, it's made of two screens, each of them showing you a scene from a slightly different angle. Since each eye will only see one screen, your brain will see the virtual world in 3D. (Unless you're a cyclops.) It will respond to the movements of your head, and development for this device will ask a lot from future developers. The 3D worlds are arriving for consumers and companies. It's still mostly eye candy to have nice 3D environments, and development for such 3D worlds is more complex than building a simple web page with text on it. It will need to conquer its place in this world.

However, there’s also development done on 3D televisions and monitors that would not require special glasses to view its content. If such a device would hit the market, then 3D development would become even more important…

So, developers… Prepare to go 3D!

Great photography, licensed or self-made…

The Internet has become extremely important in our daily lives. And more importantly, the Internet requires many developers to think more graphically. Twenty-five years ago, computers were mostly text-based with just a little graphics. The Internet was about to be born and graphics were mostly restricted to small icons and images with a limited number of colors. If you were lucky, your graphics card was a VGA card, able to handle 256 colors at 320×200 or 16 colors at 640×480 pixels. A graphics standard was needed back then, and a few new formats were born.

The PCX format, created by the now-defunct ZSoft Corporation, turned out reasonably successful because it supported up to 256 colors, with a color palette that allowed those 256 colors to be picked from the full true-color range. It also supported data compression, making the files reasonably small, yet the decompression method was pretty fast, so the processor would not need to work hard to display the image.

The PCX format was later extended to true color, but the JPG format turned out to be better. As processors improved, the more complex compression of the JPG format became fast enough to use and resulted in smaller files, although the images would lose some detail.

Another popular format was the GIF format, which allowed images with 255 colors plus a transparent color (or 256 colors without transparency). This format is still popular, since it's great for logos and cartoons and it allows animations. And the compression of GIF files reduces the image size considerably without losing any detail.

The PNG format has become more popular and was created as the successor of the GIF format. It was needed because modern graphics required more colors and there was a demand for better transparency. The PNG format uses 24 or 48 bits for its colors, allowing more colors than the human eye can distinguish, plus an alpha channel that lets each pixel's transparency be anything between fully transparent and opaque. This is great for creating things like dirty glass windows or thin, silk nightgowns as graphics.

There are, of course, many other graphics formats, but I want to talk about art, not formats. And this time, I want to talk about Pavel Kiselev, also known as photoport (NSFW), who likes to create glamorous pictures of pretty women. Today, he posted this picture of Irene, one of his models. (I've licensed it for personal use, and this is my personal blog, so it should be okay.)

And this is the kind of photography that I love to see. Should I say more?

Well, okay… I do have to keep in mind that I wanted to relate this to software development, so I should not distract myself by continuously looking into those pretty eyes. :-) So, back to the software development part…

When you’re designing websites, you have to keep in mind that you will need a lot of graphics. Something simple like an icon to display in the browser is already a requirement these days, else people have some trouble finding your site among their favorites. They can, of course, read the labels in the menu but most people will glance over all icons first and clicking on the icon that they recognise as your icon. Without the icon, they have more trouble finding you so never forget to add a Favicon to your site! Something that people will easily recognize as your brand.

Next, your site will need a logo and a background image. Or at least a logo. The best logos are PNG or GIF images, because they are small and allow transparency. The image of Irene would be bad as a logo, since it's big and takes a lot of bytes. When people visit your site over a slow internet connection, it just looks bad if the logo takes too long to download. So keep it small, yet detailed enough to be recognisable.

The background image might be bigger, unless you're designing websites for mobile devices. For mobile devices, no background image at all would be better, since it takes less bandwidth. Many mobile devices access the Internet through providers who charge per megabyte of data sent or received. Thus, for mobile sites you need to keep the amount of data to an absolute minimum, else it becomes expensive to visit your mobile website and visitors will stay away when they're roaming around…

But a favicon, logo and background aren't always enough. Let's forget the mobile devices for now and focus on the regular browsers and users who pay a fixed price for their connection. Your website will probably offer some services to customers and you need them to easily recognise what they're looking at. These days, more and more people dislike reading descriptions and prefer to see something more graphical. You might consider hieroglyphs on your website, but not many people are capable of reading ancient Egyptian. So you need your own set of icons and images for the most important actions on your website. Preferably icons with a label next to them.

Take a look at your browser and find the following buttons: Back, Next, Refresh and Home. Did you read some text to find them? Most likely, you found them by looking at the images: arrows for the back and next buttons, an arrow in a circle for the refresh button and a symbol of a house for the home button. These images have become standard, so make sure you have a few of your own to put on your own website, especially when you want navigation buttons on your site. However, do keep in mind that you either have to create these images yourself or get a proper license for images created by someone else. Considering that many icons are already in the public domain or have been created under a Creative Commons license, it should be no big problem to find some for free.

Next, you will probably need images for the products that you want to sell or display. While Irene looks very pretty, I would not use her picture if I wanted to sell socks. I would use a picture of socks instead. And make sure I have licensed that picture or created it myself. Preferably, I would create multiple images at different sizes so I can display thumbnails first and a larger version if the user wants to see more details. Again, this speeds up loading your site.

It does create a bit of a challenge, though. Would you resize the image to a thumbnail dynamically, or would you store the image as both a thumbnail and the original? Both have their advantages. Dynamic resizing allows you to change the thumbnail size whenever you like and even lets you create all kinds of custom sizes. However, your server will need more processing power to do the resizing, which is slow if your original images are created at huge resolutions. (Like most of my artwork.) If you're expecting a lot of visitors, storing images at different sizes improves performance considerably but requires more disk space, which could be a minor problem when your site is hosted and you have to pay for storage per megabyte. Then again, hosts don't charge much for extra disk space these days, if they charge anything at all.
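
To give an idea of the dynamic option, here's a rough sketch of on-the-fly thumbnail creation in C# with System.Drawing. The class and method names are made up for this post, so treat it as a sketch rather than a finished implementation:

```csharp
using System;
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Drawing.Imaging;
using System.IO;

public static class ThumbnailHelper
{
    // Resize an image on disk to a thumbnail that fits within maxWidth/maxHeight,
    // keeping the original aspect ratio, and return it as a JPEG byte array.
    public static byte[] CreateThumbnail(string imagePath, int maxWidth, int maxHeight)
    {
        using (var original = Image.FromFile(imagePath))
        {
            // Scale factor that keeps the width/height ratio intact.
            double scale = Math.Min((double)maxWidth / original.Width,
                                    (double)maxHeight / original.Height);
            int width = (int)(original.Width * scale);
            int height = (int)(original.Height * scale);

            using (var thumbnail = new Bitmap(width, height))
            using (var graphics = Graphics.FromImage(thumbnail))
            {
                graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
                graphics.DrawImage(original, 0, 0, width, height);

                using (var stream = new MemoryStream())
                {
                    thumbnail.Save(stream, ImageFormat.Jpeg);
                    return stream.ToArray();
                }
            }
        }
    }
}
```

The catch is exactly the one mentioned above: this runs for every request unless you cache the result, which is why pre-generating and storing the sizes is often the better trade-off for busy sites.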

The image of Irene would be practical for dating sites and sites for bathing products. Her hair has a wet look, giving the impression that she just washed it. She also looks very seductive, which would certainly attract the attention of many men and probably a few women too. However, on a dating site the members would probably recognise her as a professional model and thus consider it a fake image. She's too pretty to need a dating site. You'd probably scare a few members away if you used this image. It would still look great for selling shampoo, though.

So, you’re designing a website and thus you will need images to fill it up. This is often the biggest problem for many companies. In many cases, developers will just use Google to find some image and copy it to the project, ignoring the need for any license. They have good reasons to work this way, because adding proper images isn’t a real task for developers. But it could cause legal troubles if the site is published and some photographer recognizes his images. Without a proper license, it could cost you hundreds of euros to correct the situation and that’s without any other legal costs. Thus it is really bad when developers have to search for the proper images themselves.

A better solution is to create placeholder images. Provide the developers with some dummy images that you've created yourself by adding a textual description to a newly created image at the preferred size. Make sure it has a proper filename too. The developer can then insert this placeholder in the proper location, allowing him to continue his work while you start looking for a nice image to replace the placeholder. This gives you time to get a proper license or to make the image yourself. Once you're about to publish the site, all you have to do is replace the placeholders with the images that you want to display.
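
Creating such placeholders can even be automated. Here's a rough C# sketch using System.Drawing; the names are invented for this post:

```csharp
using System.Drawing;
using System.Drawing.Imaging;

public static class PlaceholderFactory
{
    // Create a grey placeholder image of the requested size with the description
    // written on it, so developers can drop it into the layout right away.
    public static void CreatePlaceholder(string description, int width, int height, string fileName)
    {
        using (var bitmap = new Bitmap(width, height))
        using (var graphics = Graphics.FromImage(bitmap))
        using (var font = new Font("Arial", 14))
        {
            graphics.Clear(Color.LightGray);
            graphics.DrawRectangle(Pens.Gray, 0, 0, width - 1, height - 1);

            string text = string.Format("{0}\r\n{1} x {2}", description, width, height);
            graphics.DrawString(text, font, Brushes.Black, new RectangleF(0, 0, width, height));

            bitmap.Save(fileName, ImageFormat.Png);
        }
    }
}

// Example: PlaceholderFactory.CreatePlaceholder("Product photo: socks", 400, 300, "product-socks.png");
```

Because the placeholder already has the final size and filename, swapping in the licensed image later is a simple file replacement.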

One more, very important thing to remember: when you get a license for any image that you use, make sure that you keep track of the specific details of that license. It would be best to have your own database where you can store the image together with information about where you licensed it, where you found it, the license itself and the name of the author. You will need this information if the author, or some company representing the author, finds your image online and thinks you don't have a proper license.

Of course, there’s a risk of having a fraudulent license. You might have gotten a license from someone pretending to be the author. This is a risk which you might avoid by keeping track of the origins of every image used by your organisation. And yes, it’s a lot of additional bookkeeping. With this information about where you got your license, you will have a good excuse to get away without any financial damages if the license turns out to be fraud. If you can continue to use the image will depend on the local legislation of the country where your organisation is located and the legislation of the country where your website is hosted.

My personal preference is to just create the images myself. This takes time and I need opportunities to create those images. For CGI artwork, my computer is fast enough to render an image in the background while I continue to work on developing my sites. Still, I am limited to one image per computer at any time and my license for Vue limits me to using the software on just a single computer. Rendering can easily take a few hours, even days, so I have to be patient.

Of course, I could just take one of my digital cameras, but that often means that I need a model, a place and the right weather if I'm going to take pictures outside. This is a lot of work for a bunch of images, and I will need to do extra work on those photos once I've taken them. They need to be cropped, lighting needs to be adjusted, colors need to be enhanced. This is just too much work for a software developer to do. So you'd better hire a professional to do this work if you don't have someone in your organisation dedicated to it. Do make sure the photographer you hire does it as "work for hire" so you're the official author. Otherwise, the photographer will have a say in how you can use the photos he took!

So, organisations have the complex task of maintaining licenses and their own images. A lot of organisations tend to forget about these details, which can result in costly problems. Make sure your developers have something to work with while they are developing. Make sure they don't have to waste time on those images themselves, since developers are costly too. They should focus on the code, not the graphics. Make sure someone in your organisation manages all images and is responsible for checking anything that's about to be published for unknown images. If an image isn't in the system maintained by the image manager, you should block the publication until this is fixed.

Multithreading, multi-troubling.

Recently, I worked on a small project that needed to make a catalog of the image files and folders on my hard disk and save this catalog in a database. Since my CGI and photography hobbies generate a lot of images, it would be practical to have something simple to support it all. There is plenty of software that already does something like this, but none that I liked. Especially since I want to connect images to derived images, group them, tag them, share them, assign licenses to them and publish them. And I want to keep track of where I've shared them already. Are they on Flickr? CafePress? DeviantArt? Plus, I wanted to know if they should be rated as adult. Some of my CGI artwork is naughty by nature (because nude models are easier to work with) and thus unsuitable for a broad audience.

But for this simple catalog I just wanted to store the image folder, the image filename, an image name that would be the filename without extension and without diacritics, plus the width and height of the image so I could calculate the image ratio. To make it slightly more complex, the folder name would be a relative folder name based on a root folder that’s set in the configuration. This would allow me to move the images to a different folder or use the same database on a different machine without the need to adjust all records.

So, the database structure is simple. One table for the folders, one table for the image ratios and one for the image names and sizes. The ratio table will help me to group images based on the ratio between width and height. The folder table does the same for grouping by folder. The Entity Framework helps to connect to this database and takes away a lot of my troubles. All I have to do now is write a simple library that fills and maintains this catalog, plus a console application to call those methods. Sounds simple enough.
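
To give an impression of that structure, here's a minimal sketch of how those three tables could look as Entity Framework code-first classes. The names are invented for this post, not the actual code of my project:

```csharp
using System.Collections.Generic;
using System.Data.Entity;   // Entity Framework 6

public class ImageFolder
{
    public int Id { get; set; }
    public string RelativePath { get; set; }          // relative to the configured root folder
    public virtual ICollection<ImageFile> Images { get; set; }
}

public class ImageRatio
{
    public int Id { get; set; }
    public int RatioWidth { get; set; }               // e.g. 16
    public int RatioHeight { get; set; }              // e.g. 9
    public virtual ICollection<ImageFile> Images { get; set; }
}

public class ImageFile
{
    public int Id { get; set; }
    public string FileName { get; set; }              // with extension
    public string Name { get; set; }                  // without extension and diacritics
    public int Width { get; set; }
    public int Height { get; set; }
    public int FolderId { get; set; }
    public int RatioId { get; set; }
    public virtual ImageFolder Folder { get; set; }
    public virtual ImageRatio Ratio { get; set; }
}

public class CatalogContext : DbContext
{
    public DbSet<ImageFolder> Folders { get; set; }
    public DbSet<ImageRatio> Ratios { get; set; }
    public DbSet<ImageFile> Images { get; set; }
}
```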

Within 30 minutes, the first version was ready. I would first enumerate all folders below the source folder, then for each folder in that list I would collect all image files of type PNG, JPG and BMP. The folder would be written to the folder table and the file would be put in the Image table. Just one minor challenge, though…

I want to add the width and height of the image to the image table too, and based on the ratio between width and height, I would have to either add a new ratio record or use an existing one. This meant that I had to read every file into memory to find its size and then check whether there was already a ratio record related to it. If not, I would need to add the new ratio record and make sure the next request for ratio records would include it. Plus, I needed to check whether the image and folder records already existed in the database, because this tool should only add new images.

The performance was horrible, as could easily be predicted. Especially since I make images and photos at high resolutions, reading those files takes dozens of milliseconds each. No matter that my six cores at 3.5 GHz and 32 GB of RAM turn my system into a speed demon, these read actions are just slow. And I did it inefficiently, since I have six cores but my code was just single-threaded. So, redo from start and this time do it multithreaded.

But multithreading and the Entity Framework don't go well together. The database connection isn't thread-safe, so you cannot access the database methods from multiple threads. Besides, the ratio table could generate collisions when two images with the same new ratio are processed: both threads would notice the ratio doesn't exist and both would add it, but one of them would then fail because the other added it first. So I needed to change my approach.

So I used 'Parallel.ForEach' to walk through the folder list and then again for all files within each folder. I would collect the data in internal lists and, when the file loop was done, loop through all images and add those that didn't exist yet. And yes, that improved performance a lot and kept the conflicts with the ratio table away. Too bad I was still reading all images, but that was not a big issue. Performance went from hours down to slightly over one hour. Still slow.
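
Roughly, that second version looked like this: scan in parallel, collect in a thread-safe list, and keep the database work on a single thread. This is a simplified sketch with invented names, not my actual code:

```csharp
using System;
using System.Collections.Concurrent;
using System.Drawing;
using System.IO;
using System.Linq;
using System.Threading.Tasks;

public class ScannedImage
{
    public string Folder { get; set; }
    public string FileName { get; set; }
    public int Width { get; set; }
    public int Height { get; set; }
}

public static class CatalogScanner
{
    public static ConcurrentBag<ScannedImage> Scan(string rootFolder)
    {
        var results = new ConcurrentBag<ScannedImage>();
        var folders = Directory.EnumerateDirectories(rootFolder, "*", SearchOption.AllDirectories)
                               .Concat(new[] { rootFolder });

        // Walk the folders in parallel; each file is opened only to get its dimensions.
        Parallel.ForEach(folders, folder =>
        {
            var files = Directory.EnumerateFiles(folder)
                .Where(f => new[] { ".png", ".jpg", ".bmp" }
                    .Contains(Path.GetExtension(f).ToLowerInvariant()));

            Parallel.ForEach(files, file =>
            {
                using (var image = Image.FromFile(file))
                {
                    results.Add(new ScannedImage
                    {
                        Folder = folder,
                        FileName = Path.GetFileName(file),
                        Width = image.Width,
                        Height = image.Height
                    });
                }
            });
        });

        // The caller loops over 'results' on a single thread and inserts the records
        // that do not exist yet, so the Entity Framework context is never shared.
        return results;
    }
}
```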

So, one more addition. I would first read all existing folders and images from the database, and if a file existed in this list, I would not read its size anymore since that wasn't needed. I could skip the image. As a result, it still took an hour the first time I imported all images, but the second run would finish within a minute, since there wasn't anything left to read or add. The speed was limited to just reading the files and folders from the database and from the disk.
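
In code, that check is little more than loading the known paths into a HashSet before the scan starts. A rough sketch, reusing the hypothetical CatalogContext from the earlier sketch:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class CatalogLookup
{
    // Load everything the database already knows in one go, so the scan can do
    // cheap in-memory lookups instead of reading files or querying per image.
    public static HashSet<string> LoadKnownImages()
    {
        using (var context = new CatalogContext())
        {
            return new HashSet<string>(
                context.Images
                       .Select(i => i.Folder.RelativePath + "/" + i.FileName)
                       .ToList(),
                StringComparer.OrdinalIgnoreCase);
        }
    }
}

// During the scan, before reading an image's size:
//   if (knownImages.Contains(relativePath)) return;   // already cataloged, skip it
```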

When you’re operating these kinds of projects in an Agile team and you’re scrumming around, things will slow down considerably if you haven’t thought about these challenges before you started the sprint to create the code. Since the first version looks quite simple, you might have planned it as a very short task and thus end up with extremely slow code. In the next sprint you would have to consider options to speed things up and thus you will realize that making it multithreaded is a bigger task. And while you are working on the multithreaded version, you might discover the conflicts with the Entity Framework plus the possible collisions within the tables. So the second sprint might end with a buggy but faster solution with lots of exception handling to catch all possible problems. The third sprint would then fix these, if you manage to find a better solution. Else, this problem might haunt you to the deadline of the project…

And this is where teams have to be really careful. The task sounds very simple, but it's not. These things are easily underestimated by a team and should be well planned before you start writing code. Experienced developers will detect these problems before they start, and know that they should take their time and plan carefully instead of writing code immediately. (I only did it so I could write this post.) The task seems extremely simple and I managed to describe it in the second paragraph of this post in just three lines. But a high-performance solution requires me to think before I start writing code.

My last approach is the most promising, though. It can be done using multithreading, but it's far more complex than you'd assume at first. And it will be memory-hungry, because you need to keep several lists in memory.

You would have to start with two threads. One thread reads the database and generates lists of files, folders and ratios. These lists must be completely in memory, because if you keep them as queryable lists the system would continuously query the database. Besides, once you're done generating these lists, you will want to close the database connection. This all tells you what you already have. The second thread reads all folders and, using parallel threads, reads all image files within those folders. But you would not read the image sizes yet, nor calculate any ratios.

When you’re done collecting the data, you will have to compare it all. You would start by comparing the lists of folders. Folders that exist in both lists can be ignored (but not their files.) Folders that exist in the database list but not the disk list should be deleted, including all files within those folders! Folders that are on disk but not in the database need to be added. Thus you can now start two threads, each with their own database connection. One will delete all folders plus their related images from the database that have been deleted while the other adds all new folders that are found on the disk. And by using two database connections, you can speed things up. You will have to wait for both threads to finish, though. But it shouldn’t be slow.

The next step is the comparison of images. Here you do something similar to the folders. You split the list into three different lists: one with all images that are unchanged, one with all images that need to be deleted, and one with all images that need to be added. You would then create a separate thread with its own database connection to delete the images, so your main process can start working on the ratio table.
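
The three-way split is straightforward with LINQ set operations. A sketch, assuming both sides have already been reduced to relative path strings:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class ImageListComparer
{
    // 'inDatabase' and 'onDisk' hold the relative paths of the images on each side.
    public static void SplitImageLists(
        ICollection<string> inDatabase,
        ICollection<string> onDisk,
        out List<string> unchanged,
        out List<string> toDelete,
        out List<string> toAdd)
    {
        var comparer = StringComparer.OrdinalIgnoreCase;

        unchanged = inDatabase.Intersect(onDisk, comparer).ToList(); // in both: ignore
        toDelete  = inDatabase.Except(onDisk, comparer).ToList();    // only in the database: delete
        toAdd     = onDisk.Except(inDatabase, comparer).ToList();    // only on disk: add
    }
}
```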

Because we now know which images need to be added, we can go through those files using parallel processing, read the image width and height and add this information to the image file records. Once we have enriched this list with the sizes, we can use a LINQ query to generate a list of all ratios of those images and remove all duplicate ratios from that list. That gives us the list of ratios that we need to check.
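
The ratio query could look something like this, reusing the ScannedImage and ImageRatio classes from the sketches above and reducing each width/height pair by its greatest common divisor so that, say, 1920×1080 and 3840×2160 end up as the same 16:9 ratio:

```csharp
using System.Collections.Generic;
using System.Linq;

public static class RatioHelper
{
    // Euclid's algorithm: greatest common divisor of two positive numbers.
    private static int Gcd(int a, int b)
    {
        while (b != 0)
        {
            int temp = b;
            b = a % b;
            a = temp;
        }
        return a;
    }

    public static List<ImageRatio> DistinctRatios(IEnumerable<ScannedImage> newImages)
    {
        return newImages
            .Select(i =>
            {
                int gcd = Gcd(i.Width, i.Height);
                return new { Width = i.Width / gcd, Height = i.Height / gcd };
            })
            .Distinct()                                   // anonymous types compare by value
            .Select(r => new ImageRatio { RatioWidth = r.Width, RatioHeight = r.Height })
            .ToList();
    }
}
```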

Before we add the new images, we will have to check the ratio table. As with the folder table, we check for all differences. However, we cannot delete ratios that we haven't found among the new images, because we skipped the images that already exist. We will do that later. We will first start adding the new ratios to the database. This too could be done in a separate thread, but it's pretty fast anyway, so why bother? A performance gain of two seconds isn't worth the extra effort if a process takes minutes to finish. So add the new ratios.

Once all ratios are added, we can add all images. We could do this using parallel threads, with each thread creating a new database connection and processing all images from one specific folder or with one specific ratio. But if you want to add them multithreaded, I would simply recommend dividing the images into groups of similar size. Keep the number of groups relative to the number of processor cores (e.g. 24 for my six cores) and let the system do its work. By dividing the images evenly over multiple threads, they should all take about the same amount of time.
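
A sketch of that grouping, again with the illustrative classes from the earlier sketches; each group gets its own context, so nothing is shared between threads:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

public static class ImageImporter
{
    // Divide the new images round-robin into a fixed number of groups and insert
    // each group on its own thread with its own database context.
    public static void AddImages(IList<ScannedImage> newImages, int groupCount)
    {
        var groups = newImages
            .Select((image, index) => new { image, index })
            .GroupBy(x => x.index % groupCount, x => x.image);

        Parallel.ForEach(groups, group =>
        {
            using (var context = new CatalogContext())
            {
                foreach (var image in group)
                {
                    // Look up the related folder and ratio records here (omitted)
                    // and then add the new image record.
                    context.Images.Add(new ImageFile
                    {
                        FileName = image.FileName,
                        Width = image.Width,
                        Height = image.Height
                        // FolderId and RatioId would come from the lookups
                    });
                }
                context.SaveChanges();
            }
        });
    }
}

// Example: ImageImporter.AddImages(toAdd, Environment.ProcessorCount * 4);
```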

When adding the new images, you will have to find the related folder and ratio in the database again. This makes adding images slower than adding folders or ratios, because you need the extra lookup. This performance would improve if we had kept the folder and ratio lists as queryable lists, but then we could not open and close the connections, nor could we use multiple connections to add those images. And we want multiple connections to speed things up. So we accept slightly worse performance at this point, although we could probably speed it up a bit by using a stored procedure to add the images. The stored procedure would have parameters for the image name, the image filename, the width and height, the folder name and the ratio width and height. I'm not too fond of procedures with many parameters and I haven't tested whether this would increase performance, but in theory it should be faster, especially if the database is on a different machine than the application.
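
Calling such a stored procedure from ADO.NET would look roughly like this. The procedure name and its parameters are hypothetical, and as said, I haven't tested whether this is actually faster:

```csharp
using System.Data;
using System.Data.SqlClient;

public static class ImageStoredProc
{
    // Hypothetical stored procedure that looks up (or creates) the folder and ratio
    // records on the database side and inserts the image in one round trip.
    public static void AddImage(string connectionString, string name, string fileName,
                                int width, int height, string folderName,
                                int ratioWidth, int ratioHeight)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("dbo.AddCatalogImage", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@Name", name);
            command.Parameters.AddWithValue("@FileName", fileName);
            command.Parameters.AddWithValue("@Width", width);
            command.Parameters.AddWithValue("@Height", height);
            command.Parameters.AddWithValue("@FolderName", folderName);
            command.Parameters.AddWithValue("@RatioWidth", ratioWidth);
            command.Parameters.AddWithValue("@RatioHeight", ratioHeight);

            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}
```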

And thus a simple task of adding images to a database turns out to be complex, simply because we need better performance. It will still take hours if there are a lot of new images to add, but once the catalog is mostly filled, it will do quite well.

But you will have to ask yourself and your team whether you are capable of detecting these problems before you start a new sprint. Designs are simple, because designers don't always keep performance in mind. These things are easily asked for because they appear very simple, but they have a lot of consequences. Similar problems might arise when you work on projects that need to be secure. The design might ask for a login screen with username and password, and optionally a few OpenID providers as alternative logins, but the amount of code needed to manage all this data and keep it secure is quite complex. These are the moments when you need to write some technical documentation first, which is something people often forget when working on an Agile project.

Still, you cannot blame the developer if the designer just writes a few lines and the developer chooses the first, slow solution. The result would be the requested task. It is the designer who needs to be aware of these possible performance pitfalls. And with Agile, you have a team. All team members should be able to point out that this simple description has these pitfalls, making it a long and complex task. They should all realise that they have to discuss possible solutions for this, and preferably they do so as a team with just one computer. (The computer would be used to find information, not to write code!) Only when they agree on the proper solution should one or two of them start writing code. And then they will know how long this task will take. Thus, the task would finish within two sprints. In the first sprint, all team members have a small task to meet and discuss the options. In the second sprint, one or more members have the big task of implementing the code.

Or, to keep it simple: think before you start writing code!

Is XML in decline?

I happen to be one of those older software developers who saw the rise of XML. I even remember the older SGML standard, although I never used SGML. Version 1.0 of XML became an official standard in 1998. Once it became a standard, many companies started working to create the killer app that would work with XML without much of a hassle. And although at first many companies created their own XML parsers, not all of them fully conformed to the standard. Those parsers disappeared fast enough too.

Right now, version 1.1 of XML is the latest standard. Yes, in 16 years not much has happened to this standard. And the changes that have been applied are mostly about supporting EBCDIC platforms and the newer Unicode definitions. There are discussions about a version 2.0, but it's not likely to become a standard soon. Strange as it might sound, XML seems to be in decline if you look at how it's used.

The power of XML was, of course, in the way you defined these files and in the transformations you could apply to them. While we used DTD definition files at first to define the structure of an XML file, some smart people came up with the XSD schema format, which allows more flexibility and is itself an XML file. Combined with some nice graphical tools, XSD made it easier to define an XML file and to validate whether an XML file conforms to the proper structure. And I've made plenty of XSD files between 2000 and 2010, since my work required a lot of XML data exchanges.

Of course, transformations are also important, and for those we use stylesheets. An XSLT file is itself written in XML and defines how to convert an XML file to some other output format. In general, this output would be another XML file, an HTML document to display in a web browser, a simple text file or even a comma-separated file. And in some special cases it could even create a complete rich-text document that you could open in Word. This meant that you could, for example, send an XML file to a server and the server would then process it. It would validate the file with a schema and could do additional validation by using a stylesheet. If it passed these validation stylesheets, other stylesheets could then be used to extract data from the XML and send it to other servers for further processing, while it could also generate a document to return to the user. You could do a lot of processing with just XML files.
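
In .NET, for example, that validate-then-transform step only takes a few lines. A minimal sketch with made-up file names:

```csharp
using System.Xml;
using System.Xml.Xsl;

public static class XmlPipeline
{
    // Validate an XML file against a schema and then transform it with a stylesheet.
    public static void ValidateAndTransform(string xmlFile, string xsdFile,
                                            string xsltFile, string outputFile)
    {
        // Validation: any violation of the schema throws an XmlSchemaValidationException.
        var settings = new XmlReaderSettings { ValidationType = ValidationType.Schema };
        settings.Schemas.Add(null, xsdFile);
        using (var reader = XmlReader.Create(xmlFile, settings))
        {
            while (reader.Read()) { /* reading the whole document triggers validation */ }
        }

        // Transformation: e.g. XML in, HTML or CSV out, depending on the stylesheet.
        var transform = new XslCompiledTransform();
        transform.Load(xsltFile);
        transform.Transform(xmlFile, outputFile);
    }
}
```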

Of course, XML also became popular because more developers started to create web services. They used the SOAP protocol for this, a slightly complex protocol that's heavily dependent on XML standards. Since SOAP also had a built-in version mechanism, you could always check whether the client was still using the right SOAP definitions. You could even use several SOAP message formats on the same system, with only the version number as the difference. It wasn't easy to set up, but it worked extremely well.

And more has been developed to support XML. XPath expressions allow you to point to specific elements within an XML document. With XQuery, you can execute queries on XML files and process the results. With namespaces, you can even combine multiple XML definitions that use similar entities. And then there are things like XLink, XPointer and XForms, which never became very popular.

Between 2000 and 2010, it seemed that XML would become a dominant development technique. No more writing code in other programming languages that needed to be compiled, simply because XML happened to provide a fast scripting environment. Many platforms started to provide standard objects to process XML files, and knowledge of XML became a hard requirement for developers. So, what changed?

Well, many developers consider the XML format a bit bulky, especially because each tag is written twice: once to open the element and once to close it. Thus, if an element is called 'NumberOfElements' then you have to write <NumberOfElements>10</NumberOfElements>, and that's a lot of text to store the number 10. As a result, some developers shorten those tag names so the resulting XML becomes smaller. If you have 10,000 of these tags in your XML file, shortening the name to TOE would save 26 characters per element, thus 260,000 characters in total. This doesn't seem much, but developers feel they gain a lot by these kinds of optimizations. With modern multi-core processors and systems with 8 or more GB of RAM, such optimizations might make the code half a second faster, which you barely notice with web services, but still… developers think it saves a lot. And yes, when resources are truly limited it makes a lot of sense, but the modern mentality is that companies will just add a second server if one is too slow. Or more, if need be. This is because additional hardware is less expensive than having developers optimize the code even further.

These kinds of optimizations make XML files less human-readable while the purpose was to make this kind of data more readable. It becomes slightly worse when the XML file uses namespaces, since those namespaces are also shortened to just a few letters.

Another problem is the need to parse XML to extract the data. More and more companies are creating web applications that run within web browsers and rely heavily on JavaScript. These apps need to be able to run on multiple devices too. Unfortunately, not all browsers support parsing XML files, and even those that do are a bit complex to use. With regular expressions it's still possible to extract some data from the XML, but if you need to fill a grid with 50 rows and 20 columns, things become really complex. To solve this, developers started to send data to web applications as JavaScript instead of XML. This could then be executed, and thus the data would load itself into memory. Since JavaScript object literals are less bulky than the begin/end tags of XML elements, this new format was very practical, and thus JSON was born.

The birth of JSON also demanded a change in web services. Since web applications would call these services directly, it would be very clumsy if they had to set up SOAP messages and then parse the SOAP results. A newer, simpler style of web services arose, which uses the REST approach. Of course, there are many other web service protocols, but REST seems to be becoming the new standard, especially because it's simpler and relies on the HTTP(S) protocol.

Of course, web applications have become more important these days because we're getting more and more devices with all kinds of different operating systems, which all have web browsers. And, as I said, not all of those devices have a native XML parser built in. They do support JavaScript, though, and as a result it becomes quite easy to develop web applications for all devices using data in JSON format.

Of course, many devices also allow special platform-dependent apps that can be created with development tools for their specific platforms. For OS X and iOS-based devices you would use Objective-C, while you would use C++ or Java for Android devices. (Java is the preferred development platform for Android.) For Windows RT you would use .NET for Metro-style applications, with either VB or C# as the primary language. This makes it a bit difficult to develop software that runs on all three platforms, but several parties have created compilers that produce platform-dependent executables from platform-independent code. Unfortunately, working with XML parsers still differs on all these platforms, and those third-party compilers need to wrap their parsers around the built-in parsers of the underlying platform. That makes them a bit slow.

Since the number of operating systems has risen as the market keeps getting more and more new devices, it becomes more difficult to maintain a single standard that's supported by all those systems. And the XML standard is quite complex, so the different parsers might not all support the same things. In that regard, JSON is much simpler, since it consists of simple assignment statements. And these assignment statements are based on the JavaScript syntax, which also happens to be similar to the C++, C# and Objective-C syntax. The only difference with these languages is the fact that JSON puts the field names between quotes too, which you can't do inside these languages.

So, XML is becoming less useful because it requires too much work to use. JSON makes data serialization simpler and is less bulky. Especially now that developers are focusing more on web applications and apps for specific devices, the use of XML is in decline in favor of JSON and other solutions. But there's one more reason why XML is in decline, and that is something within the .NET framework called LINQ.

LINQ was implemented as a separate library for .NET version 3.5 and has become popular since then. Basically, LINQ allows you to keep data in a structured object and use simple queries to extract data from those objects or to execute transformations on them. This is similar to XPath and XSLT, but now it's part of your development language, giving you more choice in the functions that you can apply to the data. This is especially important for date fields, since XML doesn't work well with date formats. LINQ actually makes extracting data from object trees quite easy and can be used on an XML document if you've read that document into memory in a proper XDocument or XmlDocument object. Thus, the need for XSLT to transform data has disappeared, since you can do the same in C#, VB, F# or Oxygene.
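
A small example of what that looks like with LINQ to XML; this is my own toy document, not code from a real project:

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

class LinqToXmlExample
{
    static void Main()
    {
        var document = XDocument.Parse(
            @"<Catalog>
                <Image Name='sunset' Width='1920' Height='1080' />
                <Image Name='portrait' Width='800' Height='1200' />
              </Catalog>");

        // The same kind of selection you would otherwise write as XPath/XSLT,
        // but with the full language available (casts, date handling, etc.).
        var wideImages =
            from image in document.Descendants("Image")
            where (int)image.Attribute("Width") > (int)image.Attribute("Height")
            select (string)image.Attribute("Name");

        foreach (var name in wideImages)
            Console.WriteLine(name);   // prints: sunset
    }
}
```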

The result is that .NET developers don't have to learn about XML anymore. Their .NET knowledge combined with LINQ is more than enough. Since .NET also allows serialization to and from XML formats, it's also quite easy to read and write XML files in .NET. You can import an existing XSD file into your .NET application and have it converted to code, but since most XML data starts as objects that need to be stored as XML, you will often see that developers just define the objects and include attributes to indicate whether the object and its fields become XML elements or attributes, and have the serialization library use these object definitions to serialize to and from XML. Thus, knowledge of XML schemas is not a requirement anymore.
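
A minimal sketch of that attribute-driven serialization; the class and element names here are just examples:

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

[XmlRoot("Image")]
public class ImageInfo
{
    [XmlAttribute("name")]
    public string Name { get; set; }

    [XmlElement("Width")]
    public int Width { get; set; }

    [XmlElement("Height")]
    public int Height { get; set; }
}

class SerializationExample
{
    static void Main()
    {
        var image = new ImageInfo { Name = "sunset", Width = 1920, Height = 1080 };

        // The serializer derives the XML structure from the attributes above;
        // no hand-written schema or parser code is involved.
        var serializer = new XmlSerializer(typeof(ImageInfo));
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, image);
            Console.WriteLine(writer.ToString());
        }
    }
}
```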

Because .NET development made the dependency on XML knowledge almost obsolete, the popularity of XML is in decline. It's still used quite often, but the knowledge needed to do practical things with XML tools is disappearing. And similar things are happening on other platforms. Java and PHP have also started supporting LINQ-style queries. As a result, those environments can work on structured objects instead of XML data. Thus, XML is only needed if the data needs to be sent to some other process, and even then other formats might be chosen.

In fact, many developers are less concerned about the data format that's used for inter-process communication. The system handles this for them and they just use a specific serialization library that does the bulk of the work. XML isn't really declining, but fewer developers need knowledge about the XML format, since development tools have nice wrappers that allow these developers to use XML without even realizing they're using it. It's not XML that's in decline. It's the knowledge about XML that is in decline…

Motivating developers…

One of the biggest problems for software developers is finding the proper motivation to sit behind a screen for 8 hours per day, designing and developing new code and new projects. It's generally boring work that requires a lot of mental effort. And the reward tends to be just more of the same work the next day, and the day after that. Creating new code or fixing existing code is like working on an assembly line in a factory, just placing a lid on a pot which someone else will close, over and over and over.

But developing code is a mental job, unlike adding lids to pots. During physical jobs, your mind can wander to what you're going to do in the weekend, what's on television or whatever else you have on your mind. A mental job makes that very difficult, since you can't think about your last holiday while also thinking about how to solve this bug. And thus developers have a much more complex job than those on the assembly line. A job that causes a lot of mental fatigue. (And sitting so long behind a screen is also a physical challenge.)

Three things will generally motivate people. Three basic things, actually, that humans have in common with most animals. We all like a good night's sleep, we all like to eat good food and we're all more or less interested in sex. Three things that apply to almost anyone. Three things that an employer might help with.

First of all, sleep. Developers can be very busy with their work both at home and at the office. Many of them have a personal interest in their own field and can spend many hours at home learning, playing or even doing some personal work on their own computers. Thus, a developer might start at 8:30 and work until 17:00. The trip home, dinner and the meet-and-greet with the family will take some time, but around 19:30 the developer will be back online on Facebook and other social media, playing some online games or studying new things. This might go on until well past midnight before they go to bed. Some six hours of sleep later, they get up again, have breakfast, read the morning paper and go back to work.

But a job that is mentally challenging requires more than six hours of sleep per day. So you might want to tell your employees to take good care of themselves if you notice they're up past midnight. You need them well-rested, else they're less productive. Even though those developers might do a great job, they could improve even more if they got those eight hours of sleep every day. And as an employer you can help, by allowing employees to visit social sites during work hours, since it helps them relax. It lowers the need to check those sites while they're at home. The distraction of e.g. Facebook might actually even improve their mental skills, because it relaxes the mind.

The second motivation is food. Employers should consider providing free lunches to their employees. Preferably shared meals, all together in a meeting room or even a dining room. Have someone do groceries at the local supermarket to get bread, spread, cheese, butter, milk, sodas and other drinks, and some snacks. While it might seem a waste of the money spent on those groceries, the shared meal will increase morale, allow employees to have all kinds of discussions with one another and improve team building. It also makes sure everyone has lunch at the same moment, so they will all be back at work at the same time again.

Developers tend to have lunch between 11:30 and 14:00, and if they have to get their own lunch, it's not unlikely for them to just go out to the local supermarket themselves or to bring lunch from home. When they go shopping for lunch, they are unavailable during that time. Of course, lunch time is their own time, but if you need them you don't want to wait until they're back from the supermarket. Another problem is that those employees will start storing food at work, in their desks or wherever else they can store it. This could attract mice, and I don't mean computer mice but those live, walking and eating animals.

If an employer provides the lunch and other snacks, this also means there's a central storage for food products. This storage is easier to keep clean than the desks of developers. Besides, those developers now know their food needs are covered during work hours, so they feel more comfortable.

The third motivation is sex. And here, employers have to be extra careful, because this is a very sensitive subject. For example, a developer might spend some time on dating websites or even porn sites. As with social websites, a small distraction often helps during mental work, but a social website might take only two minutes to read a post and respond. A dating website takes far more time, to process the profiles of possible dating partners. A porn site will also be distracting for too long and might put the developer in the wrong mood.

The situation at home might also be problematic. An employee might be dealing with a divorce, which will impact their sex life. It also puts them back into the world of dating and thus interferes with their nightlife a bit more. This is a time when they will be less productive, simply because they have too much of their personal life on their mind. Not much can be done to help them, because they need to find a way to stabilize their personal life again. Do consider sending the employee to a proper counselor for help, though.

Single developers might be a good option, though. They are already used to being single and thus will be less distracted by their dates. Still, if they're young, their single status might change and when that happens, it can have an impact on their job. But the impact might even be an improvement, because their partner might actually force them to go to bed sooner, thus fulfilling the sleep motivation.

Married developers who also have children might be the best option, since their family lives require them to live a very regular life. The care for their children forces this regularity. But the well-being of those children might cause the occasional distraction too. For example, when a child gets sick, the developer needs someone to care for the child at home. And they might want to work from home a few days a week to take care of their children.

As an employer, you can’t deal with the sex lives of your employees at work. Those things are private. However, it can be helpful for employees if they can spend more time at home, in a private area, when they have certain needs in this regard. Allowing them to work from home gives them more options. Since they don’t need to travel to work, they have more time available. If they decide to visit a dating site for half an hour, they can just work half an hour longer and no one would even know about it. If their child is sick, they can take care of them and still get work done.

In conclusion: make sure your employees sleep well, give them free lunches and other snacks at the workplace and allow them to work from home for their personal needs. All of this will help make them more productive and allow them to improve themselves.

To Agile/Scrum or not?

The Internet is full of buzzwords that are used to make things sound more colorful than they are. Today’s buzzword seems to be “Cloud solutions” and it sounded so new a few years ago that many people applied the term to whatever they were doing, simply to be part of the new revolution, not realizing that the Cloud is little more than a subset of websites and web services. And web services are a subset of the thin client/server technologies of over a decade ago. (Cross-breeding client/server with the web will do that.) It’s just how things evolve, and once in a while a new buzzword needs to be created; marketers are already working on the next buzzword that should make clear the Cloud is obsolete, simply because new products need to be sold.

Still, the software development world hasn’t been quiet either. In the past, a project would be completed through a series of steps. It would start with an idea that would be turned into a concept, and this concept would include all requirements for the project. Designers would then be called in to come up with some basic principles and additional planning. When they were done, implementation would start, which included methods to integrate the project into existing products and, basically, writing all the code. It would then be tested and, once the tests were satisfying, the whole project could be deployed and maintenance would start.

If the project ran into problems in one of these steps, the team would often have to go back one step. (Or more, on rare occasions.) This principle is called the “Waterfall model” and its drawback is that every step can take weeks to finish. It generally means that you can only release an update about twice per year. Not very popular, these days.

So, new ideas were needed to make it possible to release updates more often. It started with the Agile Manifesto in 2001, and Agile has become a very popular approach these days. Most groups of developers will have heard about it and have started implementing its principles. Well, more or less…

Agile has just four basic values to keep in mind:

Individuals and interactions over processes and tools.
Working software over comprehensive documentation.
Customer collaboration over contract negotiation.
Responding to change over following a plan.

That’s basically the whole idea. And it sounds so simple, since it makes clear what is important. Agile focuses a lot on teamwork and tries to keep every team member involved in the whole process. Make sure every member is comfortable with it and, basically, talk a lot with one another. People tend to forget it, but communication is a key element between people.

Of course, whatever you publish should work, and work well enough so users don’t complain about crashing applications or lost data. You might be missing features that customers would like, but that should not be the main focus of the whole process. Keep it working and keep the customer happy.

Of course, since you’re dealing with customers, you will need to know what they actually want. It’s fine if the CEO decides that the project needs features X and Y, but if all customers tell you they want features A and B, then either the CEO has to change his mind or the company should start looking for a new CEO.

And keep in mind that things change, and sometimes they change really fast. It’s hard to predict what next year will bring us, even online. Development systems get new updates, new plug-ins and new possibilities, and you need to keep up to get the most out of the tools available.

So, where do things go wrong?

Well, companies tend to violate these principles quite easily. I’ve seen enough projects fail because of this, causing major damage or even bankrupting companies, simply because the company failed at Agile. Failure can be devastating with Agile, since you’re developing at high speed. And as we all know, the faster you go, the harder you can fall…

Most problems with Agile start with management. Especially the older managers tend to live in the past or don’t understand the whole process. Many Scrum sprints are disrupted because management needs one or more developers from that sprint for some other task. I’ve seen sprints disrupted because a main programmer was also responsible for maintaining a couple of web servers and, during the sprint, one of those servers broke down. Since fixing it had priority, his tasks for that sprint could not be finished in time and, unfortunately, other tasks depended on this task being ready.

Of course, the solution would have been for another team member to take over this task, but that did not fit the process the company had set up. The task was for a major component that was under the control of just one developer, so he could not be replaced without disturbing the process. (Because another developer might have slightly different ideas about some of the implementation.)

Fortunately, this only meant a delay of a few weeks and we had plenty of time before we needed to publish the new product. We’d just have to hurry a bit more…

Agile also tends to fail when teams don’t work well together. Another company had several teams all working on the same project. And unfortunately, the project wasn’t nicely divided into pieces so that each team had its own part. No, all teams worked on all the code, all the pieces. And this, of course, spells trouble.

When you have multiple teams working on the same code, you will often need an extra step of merging code. This is not a problem if one team worked on part A and the other on part B. It does become a problem when both teams worked on part C and wrote code that overlaps. Things go fine when you test just the code of one team, but after the merge you need to test it all over again, so the whole process gets delayed by one more sprint just to test the merged code. And it still leaves plenty of chances for bugs that get missed during testing. Especially with manual testing, when the tester has already tested process X a dozen times for both teams and now has to test it again for the merged code. They might decide to just skip it, since they’ve seen it work dozens of times before, so what could go wrong?

As it turned out, each team would do its own merging of the code into the main branch. Then they would build the main branch and tell the testers. So while the testers were busy testing the main branch that team 1 provided, team 2 was also merging and would tell them again a few days later. The result was that all tests had to be done over again, so days of testing were wasted. Team 3 would follow after that, again wasting days of testing. Team 1 would then decide to include a small bugfix and, again, testing would have to start from the beginning, all over again.

With automated testing, this is not a problem. You would have thousands of tests that should pass and, after the update to the main branch, those tests would run from beginning to end. Computers don’t complain. However, some tests are done manually and the people who execute those tests will be really annoyed if they have to do the same test over and over with every new build. It would be better if they’d just automate their manual tests, but that doesn’t always happen. So, occasionally they decide that they’ve tested part X often enough and it never failed, so why should it fail the next time?

Well, because team 1 and team 2 wrote code that conflicts with one another and that code is in part X. The testers skip it, thus the customer will notice the bug. Painful!…
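Automating exactly those repetitive checks is the obvious remedy. Here is a minimal sketch of what such an automated regression test could look like, assuming a Python code base and a pytest-style test runner; the calculate_order_total function is made up purely for illustration and is not from any real project.

# Minimal sketch of an automated regression test (pytest style).
# calculate_order_total is a hypothetical piece of production code;
# prices are in cents so the arithmetic stays exact.

def calculate_order_total(prices, discount_percent=0):
    """Sum the prices and apply a whole-percent discount."""
    subtotal = sum(prices)
    return subtotal - subtotal * discount_percent // 100

def test_total_without_discount():
    assert calculate_order_total([100, 250, 50]) == 400

def test_total_with_ten_percent_discount():
    assert calculate_order_total([100, 250, 50], discount_percent=10) == 360

def test_empty_order_is_zero():
    assert calculate_order_total([]) == 0

A test runner like pytest picks up every function whose name starts with test_, so the whole suite can run automatically against each new build of the main branch, no matter how often the teams merge.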

There are, of course, more problems. I’ve seen a small company that had a nice, exclusive contract with a very big company. Let’s call them company Small and company Big. Company Small had created a product that company Big really liked, so they asked for an exclusive version of it, with features that company Big would choose. And this contract would be worth tens of millions for company Small and its ten employees.

And things would have gone fine if company Small had stopped working on its own products and just focused on delivering what company Big wanted, and on delivering in time. But no, other things were more important and the customer would just get what company Small made, with some minor adjustments. And the CEO was quite happy with this progress. That is, until the customer noticed that their wishes weren’t being heard. Apparently, all company Big was supposed to do was sign the contract and pay the bill, and once things were done they would just have to accept what was given to them. So company Big found another company willing to do the same project and just dumped company Small. End of contract and thus end of income, since company Small worked exclusively for the bigger company. Within five months, company Small went tits-up, bankrupt. Why? Because they did not listen to the customer and did not keep them happy.

And another problem is that companies respond very slowly to changes. I’ve worked for companies that used development tools that were 5 years old, simply because they did not want to upgrade. I still see the occasional job offering where companies ask for developers skilled with Visual Studio 2008 while there are three newer versions available already. (Versions 2010, 2012 and 2013.) In 2003 I was still working on a 16-bit project that was meant to run on Windows 3.1 and up, simply because one single customer still used an old Windows 3.11 system. At least, we thought they did, because no one ever asked them if they had upgraded. And that customer never told us that they had indeed upgraded and didn’t think of asking for a 32-bit version…

I’ve seen management hang on to a certain solution even though there’s plenty of evidence that newer options are available. I’ve developed software on 32-bit systems with 2 GB of memory when 64-bit systems with up to 8 GB of memory and more speed were available. I had to use a single-monitor setup on a PC that supported multiple monitors, while we had extra monitors available, but management considered it a waste. The world is changing and many systems now easily support two or more monitors, but some companies don’t want to follow.

So, what is Agile anyway? It’s a method to respond quickly to changes and to the desires of customers, with a well-informed team that feels committed to the task of delivering something the customer wants. (And customers want something they can use and that works…)

Would there be a reason not to use Agile? Actually, yes. It’s not a silver bullet or golden axe that you can use to solve anything. It’s a mindset that everyone in the team should follow. One single member in the team can disrupt the whole process. One manager who is still used to “the old ways” can devastate whole sprints. When Agile fails, it can fail quite hard. And if you lack the reserves, failure at Agile can break your company.

Agile also works better for larger projects, with reasonably big teams. A small project with one team of three members is actually too small to fully implement the Agile way of working, although it can use some parts of it. Such a small team makes planning a bit more difficult, especially if team members aren’t always available for the daily scrum meetings. When you’re that small, it’s just better to meet when everyone is available and discuss the next steps. No clear deadlines, since the planning is too complex. What matters is that goals are set and an estimate is made of when they will be finished. Whenever the team meets, they can then decide if the estimate is still correct or if it needs to be adjusted.

Another problem can be the specialists that are part of the team. Say, for example, that you have a PHP project that needs to communicate with a mainframe and some code written in COBOL. The team might have hundreds of PHP developers, but chances are that none of them know anything about COBOL. So you need a COBOL specialist, and basically he alone will carry the tasks of maintaining the mainframe side of the project. You can make him part of the Scrum meetings, but since he has to do his part all by himself, he doesn’t have much use for the other team members. So again, just decide on a specific goal and estimate when it should be finished. Get regular updates to allow adjustments and let the COBOL developer do his work.

The specialist can become even more troublesome if you have to interact with a project that another company is creating. If you do things correctly, you and the other company would discuss a generic interface for the interaction between both projects. You would then both build a stub for the other company to use for testing. This stub just has to offer some dummy information, but it should be usable.

When both companies have the stubs they need, they can each work on their part. They will have to keep each other informed if parts of the interface need to change or if rules change about the data that can be provided. Preferably, this is done by providing a new stub. Both teams will have just one goal, which is providing all the required methods that are part of the stubs. And when parts are fully implemented, they can offer the other company new stubs that already contain some working parts.

Still, when two companies have to work together this way, they have to think small. Don’t create a stub with thousands of methods for all the things you want to add during the next 5 years. Start small. Just add things to the stub that you want to finish in the next sprint. Repeat adding things per sprint and communicate with the other company about what they’re going to add next. You don’t have to work on the same methods of the stub anyway. One company might start working on the GUI part that allows users to enter a name, address and phone number while the other works on storing employment data and import/export management. The stubs should just provide dummy methods for those parts that aren’t implemented yet. Each company should develop the parts they consider most important, although both should be aware that everything is finished only when all stub methods are implemented.
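As an illustration, here is a minimal sketch of what such a stub could look like, assuming the shared interface is written in Python; the class and method names are invented for this example and not taken from any real project.

# Minimal sketch of a stub for a cross-company interface (all names hypothetical).
# The partner company codes against this class; methods that aren't needed yet
# return fixed dummy data or raise NotImplementedError, and every sprint a few
# more of them get a real implementation.
class EmployeeServiceStub:
    def get_employee(self, employee_id):
        # Dummy data, just enough for the other side to test its GUI.
        return {"id": employee_id, "name": "Jane Doe", "phone": "+1 555 0100"}

    def store_employment_data(self, employee_id, data):
        # Agreed upon in the interface, but planned for a later sprint.
        raise NotImplementedError("planned for a later sprint")

    def export_employees(self, file_format="csv"):
        # Dummy export so the other company can already test its import code.
        return "id,name\n1,Jane Doe\n"

As long as the method signatures stay the same, the other company can later swap this stub for the real service without changing its own code.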

Agile is just a mindset. If used properly, it can be very powerful. However, do keep in mind that not all of Agile might be practical for your own situation. Agile requires a lot of time for meetings with developers, with customers and with management. Everyone needs to be involved and everyone needs to be available for those meetings. Scrum becomes more difficult if not all team members are available on all five workdays of the week. And worst of all, team members will have to prepare for the meetings. Even for the daily meetings, since they have to keep track of their own progress.

Do not fear to implement just part of the whole Agile/Scrum principle. It is made to hybridise with other methods. Use the method, don’t let the method force itself upon you.

The FBI in Lithuania wants to pay me 15 million dollars…

I do love some of the spam messages I receive. Especially when the spammers pretend to be the FBI or some other important organisation that wants to pay me a few million. I can’t really imagine that people are stupid enough to fall for this. Then again, if they send 5 billion of these messages, the chance is quite big that they will find an idiot or two willing to fall for it.

Those people must be even more brain-dead than the spammers… This is not a very expensive scam, though. They just ask for 420 USD instead of thousands of dollars, as a payment for the ownership papers or whatever. And they tell me to stop being in contact with the other scammers, which is very good advice.

So? Well, it starts with Mrs. Maria Barnett from Canada. The address seems real, although it has been misused by plenty of other spammers. The address actually belongs to an organisation with the domain name standardchart.org, registered by Joseph Sanusi. Too bad that name sounds a bit suspicious, since there’s someone in Nigeria with the same name. (A former governor of the Central Bank of Nigeria.) He is 75 and I don’t think he’s the spammer, so someone else either has the same name or they’re faking things even more. The domain name is registered but doesn’t seem to be linked to any site or server, because it’s pending deletion.

Then they refer to Mr. Fred Walters of the FBI. Fred helped Maria get her money from some Nigerian bank, and she even got a lot more. He even showed her a list of other beneficiaries; my name was on that list, so I am eligible for lots of money too. All I have to do is contact Fred at the email address of Steve Reed in Lithuania, who seems to work at super.lt, a Lithuanian website. I don’t really understand the language, but Google Translate does. It seems to be an online book store. A strange place for the FBI. I would expect the CIA in that place instead.

Maria herself seems to have an address at Shaw, a Canadian telecom company that sells television, internet and phone services and other stuff. So we have two companies in two different countries that are somehow connected by a victim of a Nigerian 419 scam and an FBI agent.

Now, the email headers, visible at the bottom, show some more interesting connections. For example, I notice the name ‘Dealer.achyundai.com’, another link in the spiderweb of the scammers. That domain is also pending deletion. The IP address 67.211.119.59 seems to be down as well, so it’s likely the scammers have already been taken down.
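If you want to check such a claim yourself, here is a minimal sketch in Python, using only the standard library, that tests whether a host from a suspicious Received header still resolves and still accepts connections. The hostname is the one from the headers quoted below; everything else is just illustration.

import socket

def check_host(hostname, port=25, timeout=5.0):
    # Does the domain still resolve?
    try:
        ip = socket.gethostbyname(hostname)
        print(hostname, "resolves to", ip)
    except socket.gaierror:
        print(hostname, "no longer resolves (domain probably dropped)")
        return
    # Does the mail server still accept connections?
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            print(ip, "still accepts connections on port", port)
    except OSError:
        print(ip, "appears to be down")

check_host("Dealer.achyundai.com")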

But this spam message just shows how dumb the spammers make their requests and yet people keep falling for it. If the story had been more logical and the email addresses and domain names had actually looked real, then I could understand why people fall for it. But this?

Delivered-To: ********@********.***
Received: by 10.50.87.105 with SMTP id w9csp17960igz;
        Sat, 1 Feb 2014 05:42:38 -0800 (PST)
X-Received: by 10.50.80.75 with SMTP id p11mr1777051igx.19.1391262158192;
        Sat, 01 Feb 2014 05:42:38 -0800 (PST)
Return-Path: <mrs.mariabarnett@shaw.ca>
Received: from Dealer.achyundai.com ([67.211.119.59])
        by mx.google.com with ESMTPS id x1si3519252igl.27.2014.02.01.05.42.07
        for <********@********.***>
        (version=TLSv1 cipher=RC4-SHA bits=128/128);
        Sat, 01 Feb 2014 05:42:38 -0800 (PST)
Received-SPF: softfail (google.com: domain of transitioning mrs.mariabarnett@shaw.ca does not designate 67.211.119.59 as permitted sender) client-ip=67.211.119.59;
Authentication-Results: mx.google.com;
       spf=softfail (google.com: domain of transitioning mrs.mariabarnett@shaw.ca does not designate 67.211.119.59 as permitted sender) smtp.mail=mrs.mariabarnett@shaw.ca
Received: from User (unknown [207.10.37.241])
    by Dealer.achyundai.com (Postfix) with ESMTP id 02525A7FA30B;
    Sat,  1 Feb 2014 06:57:03 -0500 (EST)
Reply-To: <stevereed1@super.lt>
From: "Mrs. Maria Barnett"<mrs.mariabarnett@shaw.ca>
Subject: Make Sure You Read Now.  
Date: Sat, 1 Feb 2014 06:57:10 -0500
MIME-Version: 1.0
Content-Type: text/html;
    charset="Windows-1251"
Content-Transfer-Encoding: 7bit
X-Priority: 3
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook Express 6.00.2600.0000
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2600.0000
Message-Id: <20140201115704.02525A7FA30B@Dealer.achyundai.com>
To: undisclosed-recipients:;