TFL’s new website coping well during #tubestrike

I recently read an article in ComputerWeekly describing how TFL had redesigned their website using HTML5 to optimise performance across multiple device types. I was interested to see how the new site was handling the likely increase in traffic due to the tube strike.

Prior to the HTML5 re-write, the last major redevelopment of this site had been in 2007, well before the proliferation of mobile devices, which are now used daily to check for travel updates or plan journeys. 75% of Londoners visit the TFL website regularly, and it receives 8 million unique visitors per month.

I was in London earlier this week and, along with millions of other commuters, wanted to keep up to date with news of the tube strike. Like many others, I turned to my smartphone for answers. The site performed well on my Android phone, so I wondered whether the increase in traffic had caused any performance degradation.

At Trust IV we have developed an in-house application to test the performance of websites; we monitor performance for several hundred sites, each of which is categorised into a relevant business sector. TFL was already being monitored in our “travel” category, and I was impressed to see that the website was the 3rd fastest travel site monitored today, with a page response time of under 1.2 seconds. The site developers should be pleased with themselves.

[Image: TFL_WPT report]

Although occasional spikes in response times were observed (which is common when monitoring in this way), on the whole the site remained responsive throughout the day. Average response times appear no slower today than they were last week (the chart below shows response times in milliseconds).

[Chart: TFL response times over the day, in milliseconds]

If only more of the sites that I visit regularly performed as well as this.

Get in touch for more information about our “Test The Market” monitoring application, how it can give you insights into your own website’s performance, and how that performance compares with your competitors’.

See more articles like this, and download the response time report at:
http://blog.trustiv.co.uk/2014/02/tfls-new-website-coping-well

Unzipping files in PowerShell scripts

I’ve been working for some time on a project which is deploying a complex application to a client’s servers. This project relies on PowerShell scripts to push zip files to servers, unzip those files on the servers and then install the MSI files contained within them. The zip files are frequently large (up to 900MB), and the time taken to unzip them is causing problems with our automated installation software (Tivoli) due to timeouts.

The zip files are currently extracted using the CopyHere method.
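
For context, here is a minimal sketch of a CopyHere-style unzip in PowerShell; the paths are illustrative rather than taken from the actual deployment scripts:

# Minimal CopyHere unzip sketch using the Shell.Application COM object.
# Paths are illustrative; the destination folder must already exist.
$shell = New-Object -ComObject Shell.Application
$zip   = $shell.NameSpace("C:\deploy\package.zip")
$dest  = $shell.NameSpace("C:\deploy\extracted")

# 0x4 suppresses the progress dialog; 0x10 answers "Yes to All" to any prompts.
$dest.CopyHere($zip.Items(), 0x14)

# CopyHere runs asynchronously, so a script typically polls until the
# extracted item count matches before moving on to the MSI installs.
while ($dest.Items().Count -lt $zip.Items().Count) { Start-Sleep -Seconds 1 }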

Simple tests on a Windows 8 PC with 8GB RAM and an 8-core processor (but only a single SATA hard drive) show that this method is “disk intensive”: disk utilisation, as viewed in Task Manager, “flatlines” at 100% during the extraction.

I spent some time looking at alternatives to the “CopyHere” method, with the aim of reducing the time taken for deployments and the risk of the Tivoli timeouts which were affecting the project.

Method

A series of test files was produced using a test utility (FSTFIL.EXE), which creates test files made up of random data. These files are difficult to compress because they contain little or no “whitespace” or repeating characters, making them similar to the already compressed MSI files which make up our deployment packages.

Files of 100MB, 200MB, 300MB, 400MB and 500MB were created, and each of these files was zipped into a similarly sized ZIP file. In addition, a single large ZIP file containing all of the test files was also created.
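
FSTFIL.EXE was the utility actually used here; as a rough stand-in for anyone without it, something like the following PowerShell (file name and size are illustrative) produces similarly incompressible data:

# Write 100MB of cryptographically random, effectively incompressible data.
# Path and size are illustrative.
$rng    = [System.Security.Cryptography.RandomNumberGenerator]::Create()
$buffer = New-Object byte[] (1MB)
$stream = [System.IO.File]::Create("C:\deploy\test-100MB.bin")
try {
    for ($written = 0; $written -lt 100MB; $written += $buffer.Length) {
        $rng.GetBytes($buffer)
        $stream.Write($buffer, 0, $buffer.Length)
    }
}
finally {
    $stream.Dispose()
    $rng.Dispose()
}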

Tests were performed to establish the time taken to decompress increasingly large ZIP files.

Tests were performed to establish whether alternative decompression (unzip) techniques were faster.

Observations

The effect of file size on CopyHere unzips

Despite initial observations suggesting otherwise, once the times taken to decompress the different sized files using the CopyHere method were averaged out, the relationship between file size and decompression time was found to be linear.

[Chart: CopyHere decompression time by file size]

The difference between CopyHere and ExtractToDirectory unzips

To make this comparison, two PowerShell scripts were written. Each script unzipped the same file (a 1.5GB ZIP file containing the 100MB, 200MB, 300MB, 400MB and 500MB test files described earlier) and calculated the elapsed time for each extract, which was recorded for analysis.

Unzips took place alternately using one of the two techniques to ensure that resource utilisation on the test PC was comparable for each test.
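
A minimal sketch of the ExtractToDirectory timing approach (paths are illustrative, and this is not the project’s actual script; the method requires .NET Framework 4.5):

# Time an unzip using System.IO.Compression (requires .NET Framework 4.5).
Add-Type -AssemblyName System.IO.Compression.FileSystem

$sw = [System.Diagnostics.Stopwatch]::StartNew()
[System.IO.Compression.ZipFile]::ExtractToDirectory("C:\deploy\package.zip", "C:\deploy\extracted")
$sw.Stop()
Write-Output "Extract took $($sw.Elapsed.TotalSeconds) seconds"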

[Chart: elapsed unzip times, CopyHere vs ExtractToDirectory]

No detailed performance monitoring was carried out during the first tests, but both CPU and disk utilisation were observed (in Task Manager) to be higher when using the CopyHere method.

Conclusion

The ExtractToDirectory method introduced in .NET Framework 4.5 is considerably more efficient when unzipping packages. Where this method is not available, alternative techniques to unzip the packages, possibly including the use of “self extracting .exe” files, the use of RAM disks or memory-mapped files to remove disk bottlenecks, or more modern decompression techniques, may reduce the risk of Tivoli timeouts and increase the likelihood of successful deployments.
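
As one illustration of such an alternative, 7-Zip can be driven from PowerShell; this sketch assumes 7z.exe is installed on the target servers, which is not part of the deployment described above:

# Hypothetical alternative: extract with 7-Zip's command-line tool.
# Assumes 7z.exe is installed; paths are illustrative.
& "C:\Program Files\7-Zip\7z.exe" x "C:\deploy\package.zip" -o"C:\deploy\extracted" -y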

PowerShell scripts used

[Image: the PowerShell scripts used in these tests]

Outlook and Gmail woes

I use Outlook 2013 to sync with Gmail, and I’ve been faced with this annoying pop-up every 15 minutes or so:
“Your IMAP server wants to alert you to the following: Message too large. http://support.google.com/bin/answer.py?answer=8770.”

[Image: IMAP_Error pop-up]

This link takes me to some advice about adding and removing attachments, which doesn’t help me to resolve the problem. I came to the conclusion that my Gmail account must contain an attachment larger than my Exchange server’s attachment size limit (which I think is the default 20MB).

I found that I can search Gmail for messages larger than a certain size, like this…
size:20M
or
size:30M

[Image: Gmail size search]

Searching for messages above 20MB gave me a long list, so I decided to search above 30MB instead, and I found a single email with a large Word document attached (the attachment was a series of maps that I copied from the Internet to help teach Cub Scouts map reading).
[Image: CubsMaps]
Once I deleted this attachment, my problem was fixed. Now I just need to fix the “Outlook never completes its index of Gmail” problem and I’ll be happy 🙂