
Digital Banking – Testing OCBC API

23 May 16

OCBC API – Overview and sample code

Here it is: finally, a Singaporean bank is jumping into the API space! I've tested the OCBC API for you.

Last week, OCBC announced that they will publish four APIs to developers. Naturally, I was curious, so I've played a bit with them.

The API portal is quite neat; you can access it here.

The four APIs currently available are:

  • Branch Locator – Provides list of branches
  • Credit Card Advisor – Provides credit card suggestions
  • ATM Locator – Provides list of ATMs
  • Forex – Provides list of updated currency exchange rates

Note that Credit Card Advisor is the only API that accepts a parameter: a keyword that is matched against the descriptions associated with OCBC credit cards to produce a credit card recommendation.
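
As a sketch of what such a keyword query could look like (the endpoint URL and the parameter name below are placeholders of my own, not taken from the OCBC documentation):

```powershell
# Hypothetical sketch: the endpoint URL and the "keyword" parameter name are
# my assumptions, not taken from the OCBC documentation.
$APIKey  = "Insert_YOUR_API_Key_Here"
$BaseUrl = "https://api.example.com/creditcardadvisor"   # placeholder, not the real endpoint

# The keyword that the API matches against the credit card descriptions
$Keyword = "air miles"

# URL-encode the keyword and build the query string
$uri = "{0}?keyword={1}" -f $BaseUrl, [uri]::EscapeDataString($Keyword)

# The actual call (commented out, as it needs a valid Access Token):
# Invoke-RestMethod -Uri $uri -Headers @{ "Authorization" = "Bearer $APIKey" }
$uri
```

The point is only the shape of the request: a GET with the keyword in the query string and the token in the header.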

How to use the API

Step #1: Create an account through the portal sign-up form here; it's pretty straightforward. The process is instantaneous and gives you free access to a “Bronze” profile, allowing you to perform 1 query per minute against the API.

Note: some special characters are not accepted in email addresses, despite being perfectly standard (as per RFC 2821, 2822 and 3696).

Step #2: Create an Access Token for your application

For the time being, authorization is based on Access Tokens (like many similar APIs). Other forms of authentication may come at a later stage (certificates, OTP, etc.).

The portal provides you with a Default Application. You can create more applications, each with its own Access Token and SLA. For example, you could have a Test Application with a Bronze subscription (1 query per minute) and a Production Application with another SLA (for the time being, you need to contact OCBC to change your “level”).

In the API Console, select your application and click Generate to create your Access Token.


Note: the default lifetime of the token is 3600 seconds (1 hour); specify -1 if you don't want the token to expire.

Step #3: Once your Access Token is created, you need to subscribe your application to the API(s) you want to use.

Back in the API Console, select your application and click Subscribe.


The respective API will then be added to your subscription.


Step #4: Test it

You can test the APIs directly, without writing any code, as the OCBC team has included sample queries.

Again, the action takes place in the API Console (if you don't want to start writing code straight away).

In the Testing section, select your application (the one for which you've generated an Access Token and subscribed to the APIs). The Request Header will automatically be updated with your Access Token.


Click the Get, then Try it out buttons. The API Console then uses curl to generate a query to the API. The Response Body uses a basic JSON structure.

That's it! You can see here the list of branches returned in response to a GET query to the Branch Locator API.


Step #5: Code!

Take your favorite language and simply use the URL, with the Access Token in the header.

Below is a PowerShell example (yeah, I know, it is probably not the most widely used language to query a web service, but I like it anyway!).

# Ocbc-branches.ps1

# Put your OCBC API Key here
$APIKey = "Insert_YOUR_API_Key_Here"

# The URL to the Branch Locator API
$OCBC = ""

# Create the headers with your API Key
$headers = @{
    "Authorization" = "Bearer $APIKey"
    "Content-Type"  = "application/json"
}

# Query the web service
$j = Invoke-RestMethod -Uri $OCBC -Headers $headers

# Display each branch returned in the JSON response
foreach ($branch in $j.branches) {
    Write-Host $branch
}

Done! Your JSON object now contains the response body and can be accessed with the usual dot notation (see the portal documentation for details).
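
As an illustration, assuming a response body shaped like the Branch Locator output (the field names below are my guesses, not copied from the documentation), the dot notation works like this:

```powershell
# Illustrative only: the field names below are assumed, not taken from the OCBC docs.
$body = '{ "branches": [ { "name": "Main Branch", "address": "65 Chulia Street" }, { "name": "North Branch", "address": "Yishun Ave 2" } ] }'

# Invoke-RestMethod does this conversion for you; here it is done by hand for the demo
$j = $body | ConvertFrom-Json

# Dot notation then navigates the object graph
$j.branches.Count      # number of branches returned
$j.branches[0].name    # name of the first branch
$j.branches[1].address # address of the second branch
```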


Note : don’t forget that by default, the Access Token is expiring after 1h !

Step #6: Publish to the world!

If you want to Go Live and publish your brand new app, you will very likely need many more queries per minute than the Bronze subscription allows…

To do so, you can contact OCBC, who will review your development. I don't know if they will charge for the higher subscription levels… But even if they do, I think (and hope) the fee will be reasonable (like Azure or AWS API costs, priced at a few cents per 100,000 transactions, or a similar model).



The OCBC APIs are quite easy to use and integrate into custom code. The API subscription process is clear and straightforward.

Obviously, the amount of data published is very limited so far. For the time being, there is no statement or transaction data, which could dramatically change the banking application environment in Singapore once available.

But at least it is a start: OCBC has APIs published and a portal in place to manage them.

How close are they to enriching the APIs with transaction data? How far did they go in integrating their core banking system with the (potential) future APIs?

This remains to be seen…

If anybody knows more, feel free to drop a comment below !


Pierre-Olivier Blu-Mocaer







Office 365 Diagnostic Tool

25 Nov 15


Everything you always wanted to know about your Office 365 data centre, geographical location, tenant and your connectivity to it.

This PowerShell script was written to help with Office 365 connectivity and performance diagnostics.
The script also tries to provide you with more geographical details on your tenant. It's still an early version / work-in-progress initiative, and your feedback is welcome.

Download it from here: Office365-checker.ps1

Script features

– Verifies that all Office 365 URLs are bypassed on your Internet proxy (as per Microsoft's recommendation)
– Provides a detailed traceroute to your closest Office 365 datacenter; this custom traceroute function uses an external configuration file to increase the geographical detail provided
– Detects the geographical location of your Office 365 tenant and mailbox (based on the same configuration file as above)
– Provides your external public IP and its geolocation
– Detects your closest Exchange Online CAS server, its location and average latency
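
To illustrate what the proxy verification amounts to, each Office 365 URL can be matched against the proxy bypass list; the function below is a simplified sketch of that logic (the real script inspects your actual proxy configuration, and the function name here is my own):

```powershell
# Simplified sketch of the proxy-bypass check: the real script reads your proxy
# configuration; here the bypass list is passed in as a parameter.
function Test-ProxyBypass {
    param(
        [string]   $Url,        # Office 365 hostname to verify
        [string[]] $BypassList  # proxy bypass entries, wildcards allowed
    )
    foreach ($entry in $BypassList) {
        if ($Url -like $entry) { return $true }   # -like supports * wildcards
    }
    return $false
}

$bypass = @("*.office365.com", "*.outlook.com")
Test-ProxyBypass -Url "outlook.office365.com" -BypassList $bypass   # True
Test-ProxyBypass -Url "www.example.com"       -BypassList $bypass   # False
```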


Prerequisites: a regular Exchange Online (Office 365) account, Windows 8.1 and above, PowerShell 4.0 with the Azure modules (run as administrator), MS Sign-On


Usage:

Office365-checker.ps1 -Action <parameter>

Parameters:

All : Perform all Office 365 tests
Proxy : Verify that Office 365 URLs are bypassed on the proxy
Traceroute : Detailed traceroute to the closest Office 365 data center
Tenant : Detect the geo-location of your Office 365 tenant and mailbox
Geo : Get your public IP and its location
CAS : Detect and ping your Exchange Online CAS server
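
Internally, an -Action parameter like this typically maps to a dispatch block, one function per test; the sketch below illustrates the pattern (the function name and the stubs are my own, not the actual script's internals):

```powershell
# Minimal sketch of the -Action dispatch pattern; the real script calls one
# diagnostic function per action, stubbed out here as comments.
function Invoke-O365Check {
    param([ValidateSet("All","Proxy","Traceroute","Tenant","Geo","CAS")]
          [string]$Action = "All")

    $ran = @()
    if ($Action -in @("All","Proxy"))      { $ran += "Proxy" }       # proxy bypass check
    if ($Action -in @("All","Traceroute")) { $ran += "Traceroute" }  # detailed traceroute
    if ($Action -in @("All","Tenant"))     { $ran += "Tenant" }      # tenant geo-location
    if ($Action -in @("All","Geo"))        { $ran += "Geo" }         # public IP location
    if ($Action -in @("All","CAS"))        { $ran += "CAS" }         # CAS detection + ping
    return $ran
}

Invoke-O365Check -Action Proxy   # runs only the proxy verification
Invoke-O365Check -Action All     # runs every test
```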



Proxy verification and Traceroute from Singapore to Australia Office 365 DC (Click to enlarge)

Screenshot of Traceroute feature

Tenant geographical details for a South-East Asia based subscription (Click to enlarge)

Screenshot of Tenant diagnostics

Screenshot of Tenant details

Call for feedback and beta testers

The geographical information used to create the XML configuration file located here came from my own investigations and research.

If you have more details, or if an unknown location is detected for your tenant, it would be great if you could send me an email or drop a comment below so I can improve the script.

Air Traffic Control Power Fault – When routine test fails

29 May 15

“More than 200 flights in and out of Belgium were cancelled or diverted on Wednesday 27th May after a power surge disabled the operations of Belgocontrol, the domestic air traffic controller” (Source : Reuters)

In this post, we will try to go behind the scenes of what may have happened at Belgocontrol earlier this week. We will discuss air traffic control, data centres, generators, UPS, disaster recovery, RTO and so on.

Reading time : ~10 mins

Update :

2nd June 2015 – The generator was not grounded (Source)

8th June 2015 – The generator was not grounded since its installation in 2005 (Source)

18th November 2015 – I've received an important update from an anonymous source.

While I usually try to source all the information I base these analyses on, I found this one credible and interesting enough to quote here (with the source's agreement).

“- First of all, there was never a power loss in either of the four technical rooms (separated in different buildings btw).
– The power loss was only at the controller working position.
– The problem was that the power spike killed the static switches in the working position. Those switches can switch between the main and backup power supply.
– There was never a problem in the control tower and regional airports because the technical rooms were fully operational the whole time. “

So, in the conclusion of the initial analysis earlier this year, I raised some questions about why the whole data center went down due to this power spike.
The information above clarifies this point: the data center itself never went down. Only the air traffic controllers' workstations were impacted.

This is also a good reminder that end-user workstations can be as critical as any other IT system.

Thanks to the anonymous contributor for the clarification.

What’s Belgocontrol ?

Belgocontrol is “an autonomous public company in charge of the safety of air traffic in the civil airspace for which the Belgian State is responsible”.
The controlled civil airspace consists of the local airspace (CTR – Control Zone) and the terminal areas (TMA – Terminal Area) of the airports of Antwerp, Brussels Airport, Charleroi, Liège and Ostend. Apart from these areas, the airspace is organised as a network of airways and specific controlled areas (CTA). (Source)

The scope of Belgocontrol ATC (Air Traffic Control) is limited to part of the local Belgian airspace. Basically, Belgocontrol manages planes landing at and taking off from Antwerp, Brussels, Charleroi, Liège and Ostend airports, as well as local traffic flying below FL245 (Flight Level 245 = 24,500 feet ≈ 7,500 metres altitude).
Planes transiting above Belgium are managed by Eurocontrol, a European ATC organisation (intended to manage the Single European Sky, which you may have heard of during some countries' ATC strikes…).

Contrary to popular belief, airspace is an extremely controlled area, with different organisations in charge of different geographical zones and flight levels.
You can see below an example of the various “layers” you will have to cross to land in the Brussels/Charleroi zone. (Source)

West ACC = Eurocontrol West Area Control Center
EBBR APP = Brussels Airport Approach
EBCI TMA  = Brussels Charleroi Terminal Manoeuvring Area
EBCI CTR  = Brussels Control Traffic Region

Then you will be handed over to the famous “control tower”, which manages only the final parts of the flight (the people managing your flight en route are not based in this tower).

To (over)simplify: for planes that need to take off and land in Belgium, Belgocontrol is in charge.

Belgocontrol is located at Brussels airport, near the runways.

Belgocontrol and CANAC2



CANAC 2 stands for “Computer Assisted National Air Traffic Control Centre”. It was inaugurated in 2010 and uses a Thales ATM (Air Traffic Management) system called Eurocat, since renamed TopSky (Sources: Thales, Thales).

Eurocat / TopSky runs a modified version of Red Hat Enterprise Linux called Thalix.

The CANAC 2 control room looks like this.

Canac 2 room

You can see a bit more of the CANAC 2 room in this short video.

CANAC 2 Datacenter

From Belgocontrol : “In CANAC 2, there are several levels of redundancy of the systems crucial to air traffic safety: the Nominal, the Fallback and the Ultimate mode. The first is the normal operating mode. Fallback and Ultimate, respectively, are the first and second backup levels. They use independent systems which feed the work positions with radar and flight plan data and maintain vocal communications.
In addition to these backup modes, these systems are physically duplicated and installed in two separate computer rooms, far apart from each other and powered by different electrical networks.”

Those computer rooms seem to be hosted within the CANAC premises. I was unfortunately unable to find a picture of the datacenter(s) hosting the CANAC 2 Eurocat system.
However, some details are available on CANAC 1 (the former Belgian air traffic control centre), from Schneider.


Schneider commissioned the electrical part of the former 2,000 sqm datacenter, split into three rooms, in addition to a 400 sqm backup room.
I don't know if those rooms are still in use, or if Belgocontrol built new computer rooms for CANAC 2. If someone knows more, feel free to leave a comment below.

Update 8th June 2015: per a recent report, the faulty generator grounding had been wrongly installed since 2005, so it is likely that the datacenter above is the one that went down.

UPS and Generators

A UPS is an Uninterruptible Power Supply: an electrical appliance used to provide power in case of power failure. There are many different types and capacities of UPS.
To (over)simplify again, a UPS is a piece of dedicated hardware connected to a set of batteries, able to provide power to computers.

UPS and batteries

The power needed to run a datacenter is very significant. In case of an outage, UPS batteries will only last a few minutes, sometimes up to 30 minutes. In a prolonged power outage, a UPS will not be able to do more than power your servers for those few minutes.
In modern datacenters, the UPS just provides temporary backup power, giving the generator time to kick in.
But a UPS also has other important functions: it can correct power fluctuation problems, like voltage spikes, frequency problems and “line noise” (we will come back to this in the Generators section). A stable (frequency, voltage, etc.) power supply is called “clean power”.

To do all this (backup power + fluctuation correction), a UPS needs to be installed “in-line”: between the equipment to power and the power source. When the normal power source is running and stable, the UPS conditions the power and delivers it to the equipment. As soon as a problem is detected, the UPS takes action: it can correct a wrong voltage or frequency, or switch to batteries if the main power source fails.
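
The few-minutes figure above comes straight out of the energy arithmetic; the numbers below are illustrative only, not taken from any real site:

```powershell
# Back-of-the-envelope UPS runtime calculation: all numbers are illustrative,
# not taken from any real datacenter.
$batteryWh  = 20000     # usable battery capacity in watt-hours (20 kWh)
$loadW      = 200000    # IT load in watts (200 kW)
$efficiency = 0.9       # inverter efficiency

# Runtime (minutes) = usable energy / load, converted from hours to minutes
$runtimeMinutes = ($batteryWh * $efficiency / $loadW) * 60
$runtimeMinutes    # roughly 5 minutes: enough time for a generator to start, no more
```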

A generator is (usually) a diesel- or gas-powered engine producing power, installed in or near the datacenter it protects.



Update 8th June 2015: media were invited to visit the datacenter; video footage shows two generators and the electrical panels.

BelgoControl generators


A generator, due to its physical characteristics, will provide “dirty power”, with fluctuations in frequency, electrical noise and so on.

Dirty Power

While this is not an issue when you are using a small generator to power a light bulb or two during a camping trip, it would be a catastrophic event for IT equipment.
Fortunately, in a datacenter, the UPS, connected in front of the IT equipment, will correct this and protect the IT load by providing “clean power” to the equipment.


Clean Power

What can go wrong?

Like any equipment, UPSes and generators need to be regularly maintained, monitored and tested.

It is common (and a best practice) to regularly start the generator to ensure it works properly. If you have ever seen a strange plume of black smoke in a busy business district (like the Paris – La Défense area), there is a good chance it was a genset (generator set) test.

Different genset tests can be performed: you can decide to fail over your regular power source to the generator, or not.
Obviously, failing over to the generator is riskier, but it's also a more complete test. I don't know which type of test was scheduled at Belgocontrol.

To ensure complete power redundancy in a datacenter, you can have two (or more) generators powering two fully independent power lines to two separate sets of UPS, each one powering one side of the IT rack and the (dual-powered) servers.
Of course, you have to ensure that your servers are dual-powered and that each power supply is connected to a different power line (trust me, mistakes happen).

Anyway, duplicating everything is costly, and organisations that are not datacenter companies (like Equinix and co) will usually only have one generator, but two power lines to the racks.

It's not clear what kind of setup Belgocontrol has. But the media have reported the following statement:

“Dominique Dehaene of the Belgocontrol agency said that a sudden power surge had taken out the main air traffic control system and also blew the switches to the emergency generators. “We were twice unlucky,” he said” (Source : AP)

Update 8th June 2015: there are two generators at Belgocontrol, as recently shown on TV.

Personally, I think the term power surge should probably be replaced by power spike. A power spike is a very short high-voltage burst (a few milliseconds), while a power surge is a (usually lower) voltage increase over a slightly longer period (seconds).

A spike is common when starting equipment (electrical or not); this is called the transient state.

Let's draw up some hypotheses.

Note: there are two generators at Belgocontrol, as recently shown on TV, but this doesn't change the overall principle.

Hypothesis 1

For this one, we will assume:

– There is only one generator at the Belgocontrol CANAC 2 production datacenter
– The generator test was not an end-to-end test (i.e. the power to the racks was NOT failed over to the generator)
– There are separate power lines and UPS to the racks

schema 1

The generator start-up may have produced a power spike violent enough to blow out control panel #1.
UPS #1 would have done two things: protected the IT racks from the surge (if the control panel hadn't done it first, blowing like a fuse!), and provided electrical power to circuit A in the racks.
Even with UPS #1 blown or faulty, circuit B (still up as per our assumptions) + UPS #2 should have been enough to maintain power to the IT equipment (provided the racks and servers were dual-powered).

Hypothesis 2

For this one, we will assume:

– There is only one generator at the Belgocontrol CANAC 2 production datacenter
– The generator test was not an end-to-end test (i.e. the power to the racks was NOT failed over to the generator)
– For some reason, there are no separate end-to-end power lines to the racks

schema 2

In that case, assuming control panel #1 didn't stop the spike, UPS #1 and #2 may both have been damaged…

Or maybe there was only one UPS for both lines and this UPS died during the spike? Or maybe the racks were not really dual-powered…?

Update #1: 2nd June 2015 – the generator was not properly grounded (Source).

This lack of proper grounding prevented the power spike/surge from being properly discharged, blowing up one or more static switches (referred to as the control panel in the diagrams above).

Update #2: 8th June 2015 – the generator had not been properly grounded since its installation in 2005 (Source).

An electrical fault occurred in one of the generator's cooling sub-systems during the monthly test (Source). Due to the lack of grounding on the generator, the power had no choice but to go through the electrical panels, destroying some of them.

It is still unclear why the whole datacenter went down as, from the latest reports, only one generator/power line was impacted…

Update #3: 18th November 2015 – As quoted at the top of this post, an anonymous source clarified that none of the four technical rooms ever lost power: the spike killed the static switches at the controller working positions, so only those positions went down, while the control tower, the regional airports and the data center itself were unaffected. This is also a good reminder that end-user workstations can be as critical as any other IT system. Thanks again to the anonymous contributor for the clarification.

Anyway, disasters happen, whatever level of redundancy you may have. Which leads us to another important question:

What about Disaster Recovery?

Above, we saw that a separate 400 sqm room existed as disaster recovery for the former CANAC (1) systems.
Is it still in use? Is there a new one? Is it located in the same building, powered by the same generator or UPS?

Update #3: based on the source quoted above, the IT rooms themselves were not impacted, and they are redundant between buildings.

Even with a separate IT room, even in a separate building and with a dedicated power line: is there a user disaster recovery area, where air traffic controllers can sit and connect to the disaster recovery systems?

From the media statements, most of the systems were restarted after 5 hours.
A 4-hour RTO (Recovery Time Objective) is commonly used in corporations (including banks) for a major disaster (and a general power failure is a major disaster).
But the general public (and corporate users) are becoming more and more impatient… More and more systems are active-active, but active-active comes with other challenges and doesn't mean they will be foolproof either.

Some vendors claim to provide full redundancy for their systems, but what happens to the interconnection of those redundant systems? I've seen buildings with two separate network trunks, built at opposite corners of the construction, but landing in the same MDF (Main Distribution Frame) in the basement.

As a conclusion:
– Critical systems can fail, and they will.
– Don't assume you have end-to-end redundancy.
– “Expect the best, plan for the worst, and prepare to be surprised.”

I don't know the agreed RTO for an ATC like Belgocontrol, but as it is a regulated organisation, there is a chance a detailed report will be made public (unlike for the TV5 Monde incident).

I will update the post if more details are published.

Update 02/06/2015: generator not grounded

Update 08/06/2015: generator not grounded since 2005 + electrical failure in one generator cooling subsystem

Update 18/11/2015: only the air traffic controllers' workstations were impacted by the surge (static switches were damaged); the data center(s) themselves were not impacted

Pierre-Olivier Blu-Mocaer
FixSing Consulting


TV5Monde – A (tentative) technical analysis

11 Apr 15

As it may appear surprising that a TV station can be forced to stop broadcasting after having its website defaced and its social network accounts taken over by hackers, I've tried to collect publicly available technical information and improve my understanding of this interesting incident. Below you will find my own technical analysis of the infrastructure components that may be in use at TV5Monde. I'm not an expert in TV broadcasting, my main speciality being IT infrastructure; however, as we will see, those two domains are becoming closer.

What is TV5 Monde (TV5 World in English)?

TV5 Monde is a global television network, editing and broadcasting French-language content mostly taken from French-speaking countries (France Télévisions for France, RTBF for Belgium, RTS for Switzerland, and Radio-Canada). TV5 also produces some content of its own, like news magazines. TV5 is headquartered in Paris (Avenue de Wagram).

Being a French national who has worked and lived abroad for years, I'm a subscriber of TV5 Monde through my local TV provider (StarHub in Singapore). It is quite popular with French speakers around the world, but of less interest to French nationals residing in France (being mostly an extract of programmes already available on French channels).

Attack summary

From the news reports, TV5 Monde first lost control of their Twitter and Facebook accounts, then the tv5monde website was defaced. Shortly after that, the email system seems to have stopped working, then the TV broadcast system. This is not confirmed and is only taken from public news sources. The video broadcast was stopped for approx. 3 hours; it was then restored for pre-recorded content only.

In this investigation I will not focus on the Twitter/Facebook hack or the website defacement, but will try to get a better understanding of the TV5Monde infrastructure in general.
Let's move on to the technical part. I will split this study into two parts. The first will look at the “Corporate Network”: the day-to-day general network used by staff, supporting email, Internet browsing, phones and so on.
The second part will focus on the “Broadcasting Network”, used by staff to produce (and post-produce) video casts and stream them globally.

What can be remotely discovered about TV5 Monde's IT infrastructure, without directly performing any scan of their systems (which are still mostly down at this time)?

Corporate Infrastructure

This part is what I will call the “Corporate Infrastructure”, supporting users and general services (email, file servers and corporate applications).
Let's begin with the DNS. Several DNS domains seem to be in use. Some are managed by Gandi, a well-known French provider with a serious reputation; Gandi provides two-factor authentication for your DNS management portal. The other domains are managed by another company, which I don't personally know, and I don't know what kind of security they provide to clients for administering their DNS.

The MX records can then be retrieved. An MX record is a Mail Exchanger record; its Preference value sets the order in which mail servers are tried (lowest first). The lookup returned two MX records (Preference 10 and 20) for one domain, three (Preference 5, 10 and 20) for another, and one (Preference 10) for a third.
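
To illustrate how the Preference value is used (lowest tried first), here is a quick sketch with invented hostnames:

```powershell
# Illustrative MX record set: the hostnames are invented for the example.
$mx = @(
    [pscustomobject]@{ Host = "mx-backup.example.org";  Preference = 20 }
    [pscustomobject]@{ Host = "mx-primary.example.org"; Preference = 5  }
    [pscustomobject]@{ Host = "mx-second.example.org";  Preference = 10 }
)

# Sending mail servers try the host with the lowest Preference first
$ordered = $mx | Sort-Object Preference
$ordered[0].Host   # the Preference-5 host is tried first
```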

Apart from one domain, all the MX records point to what may be a legacy domain (TV5 changed names several times over the past years).
From a Google search, the most frequently and publicly communicated email domain appears to use two of these hosts as MX. Both hosts are still up; they are WatchGuard network security appliances, with their admin web interface externally responding on port 443 (with an expired and incorrect certificate).

Using the IPs of the MX hosts, we can now find the public ranges in the RIPE public database.

Two public Internet ranges owned by TV5 can be found: a /24 (TV5-NET) and a /28 (FR-TV5-MONDE).

TV5-NET – TV5 Monde, France. Contact: Jean-Pierre VERINES, 131 avenue de Wagram, 75017 Paris, France. Phone: +33 1 44 18 55 55, fax: +33 1 44 18 49 38. Registry: RIPE NCC.
FR-TV5-MONDE – TV5-MONDE, France. Contact: Vincent FLEURY, TV5 MONDE, FR-PARIS, France. Phone: +33 1 44 18 55 23. Registry: RIPE NCC.

A third range, from a Claranet (French ISP) allocation, is not fully owned by TV5 (it is an ISP range):

TYPHON – Typhon, France. Contact: Claranet Network Operations Center, 18-20 rue du Faubourg du Temple, 75011 Paris. Phone: +33 1 70 13 70 00, fax: +33 1 70 13 70 01. Registry: RIPE NCC.

Open ports ?

We can now explore those network ranges with Shodan, an online search engine for Internet-connected devices.

A Shodan query on the TV5-NET network range reveals disturbing findings… (a query on the other range didn't show any results). It must be noted that the Shodan results are time-stamped from a month ago. (You need a free account to see the results.)

(Partial) screenshots of the Shodan results:



A number of TV5 internal IT components were externally accessible recently (March 2015, before the network shutdown): a Google Search Appliance, a PowerShell remote management interface, a Dreambox (digital television receiver), a Mac Mini, a Remote Desktop to a Windows server, an SMB file share, and so on.

As shown in the screenshot, an interesting system found is Isilon. I will explain later why this may be important.

At this stage, it seems that there was a significant Internet exposure of TV5Monde internal IT systems, from the Corporate Network.

What else can be learned on their systems, from public sources?

A set of TV5 Monde tenders / technical Requests for Proposals (“Appels d'Offres” in French) are available online. Those documents are RFPs for vendors to bid on the management of TV5 Monde's internal infrastructure, and were published in mid-2013.
They are public due to a European/French law on public tenders, with which TV5 Monde must comply (even though it is a private company, if I'm correct). They can be read here (in French only):

From the RFPs, we can identify the following equipment as being used in 2013 on their internal network:

  • Checkpoint 4407 / Cisco ASA 5520 – Firewalls, VPN, maybe remote access (the scope of usage is not specified in the RFP)
  • WatchGuard XTM810 – Probably the email front ends seen earlier
  • Nexus 7010 – Large core switches, probably connected to the SAN
  • 2 Cisco 6513 – Access switches
  • Mitel IP phones
  • Rancid – Network device configuration tracking
  • Cisco Fabric Interconnect – to the SAN
  • VMWare
  • Citrix
  • FTP
  • Backup + File Servers
  • 4D / FileMaker
  • Oracle (1 production cluster, 1 Backup VM)
  • SQL Server (3VM for IT , 2 active/passive clusters for production)
  • MySQL (11 standalone VM, 4 clusters)

A rather classic set of infrastructure hardware for a medium-size corporation.
Internet browsing is not explicitly detailed; there is no mention of a reverse proxy or Web Application Firewall being used (which doesn't mean there isn't one).
I haven't found any public information on which type of antivirus is in use internally on desktops and/or servers, nor on the patch management of the Windows and Mac environments.

All this is useful, but doesn’t help to understand the TV Broadcasting infrastructure.

Broadcasting Infrastructure

Through some online research, some of the equipment used (or still in use) can be identified in what I will call the “Broadcasting Infrastructure”. Broadcasting is a long, interconnected chain of processes, tools and staff.
In September 2012, TV5 Monde renewed its outsourcing contract with Ericsson, “the biggest broadcast outsourcing contract in France”.

Digital broadcasting relies heavily on infrastructure similar to that of traditional servers, but with significant differences in terms of hardware, software and connectivity. A very detailed technical article on the new TV5Monde broadcasting centre can be found here (in French only, but the pictures are worth checking):

On top of the video content produced internally at TV5Monde, incoming and outgoing video feeds come and go from/to various providers (you can identify Arqiva in one of the screenshots; Arqiva is a communications company providing satellite access to media organisations). Most of those connectivity links should be private links (like Colt, which we can identify in the screenshots of the article).


However, TV5 Monde is also broadcasting on the Internet. It seems that they are using the Tata Communications CDN; I don't know how the content is pushed to them.

A set of different software is running on dozens of processing servers in the Broadcast Infrastructure. Such servers are used to manage the massive amount of video contents, their commercial rights, their encoding, their meta-data and so on.

Those processes are called MAM (Media Asset Management) and PAM (Production Asset Management). The software used is a mix of commercial products and in-house developments: iNews from Avid, Mosart from Vizrt, Louise and SGT (third-party developments). A third-party development called SYGEPS is used as an orchestrator; it's a Tomcat + Oracle DB stack running on CentOS (source:

Production (mostly the preparation of news reports) is done on 60 workstations (brand not specified). Post-production (tasks performed on the content after production itself) is done on 24 workstations (brand not specified, but possibly Mac). It is not specified where these workstations sit or to which local network they are connected. Are they running on the Corporate Network described earlier, or are they segregated?

From news video footage shot in the TV5Monde premises during the incident, some of the staff in the office seem to have two desktops: one PC and one Mac.


WeTransfer ?

An interesting interview with a TV5 Monde staff member (in French again) is available here:

The TV5 Monde staff member interviewed explains that he was preparing a newscast when the attack started. My translation: "I was expecting files from Gabon (Africa), and those files were not arriving. I was with the sender over the phone, as he was sending me the files through email – in fact it was simply a link to WeTransfer. And the sender kept telling me that the email was sent but nothing was coming through on my side".

What's interesting here is not that the email was not arriving or that the download was not starting, which may be due to various reasons (linked to the attack or not), but the use of WeTransfer for preparing a newscast.
WeTransfer being a public cloud-based file-sharing service, does that mean that the Production Workstations have direct access to the Internet, as well as being on the same LAN as the Broadcast servers? Or are the files downloaded on the Corporate Network and then safely transferred to the Production side?

For those who want to explore those MAM/PAM software packages further, interesting reading can be found here (French) and there.
Overall, there is a long list of servers and software used in the production and post-production chain.

However, our main interest is to better understand how the broadcast itself was stopped for 3 hours.

Windows 7 for the Transmission Server ?

One of the components repeatedly mentioned by the media during the crisis was the "Transmission Server". I'm quoting journalists and one TV5 staff member saying on TV: "The Transmission Server was hacked". Obviously, this is far from fully confirmed. However, we can find references to what may be a transmission server potentially in use at TV5Monde here: and there

The Nexio Volt (from Harris Broadcast) is a media server able to stream multiple feeds in SD and HD. It's a 1U rack server running Windows 7 x64 Ultimate Edition. It is not specified whether any hardening is done on the OS by default.
I don't know the technical rationale behind this OS choice for a server; maybe it is due to graphics driver availability for the video processing cards.


Nexio Volt specification details can be found here:

Connected to the Volt, the Farad is a SAN storage system designed for broadcast and production facilities.

Another system which seems to be in use in the TV5 Monde Broadcasting department is the Pixel Power ChannelMaster:

You can find more details on the usage of a ChannelMaster here:


The Production SAN exposed on Internet ?

I haven't been able to find the OS used or detailed tech specs for the ChannelMaster. However, in the same article, we can find the following information:

“TV5 Monde has purchased Pixel Power’s ChannelMaster no compromise integrated playout technology. It has also installed Gallium, Pixel Power’s integrated, sophisticated and scalable scheduling, asset management and automation system”
“Gallium is also integrated with the broadcaster’s secondary storage – IBM and Isilon – to manage media transfer to the ChannelMaster local playout cache”

Isilon… which we saw earlier in the Shodan report, with an FTP service exposed on the Internet. Isilon is an EMC SAN storage system, specially designed to cope with the storage constraints of video broadcasting.

TV5 owns at least one:

This press release shows a simplified diagram of where the Isilon sits in the broadcasting chain.

Is this Isilon the same server as the one appearing in the Shodan report? I can't confirm it, but given the probable high cost of such equipment, it doesn't seem economically sound to use an Isilon just as an external FTP server…
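To illustrate the kind of check behind these Shodan observations, here is a minimal sketch that filters Shodan-style banner records for FTP services exposed on the Internet. The records, IP addresses and banner strings below are entirely made up for illustration; they are not TV5 Monde's actual data.

```python
# Minimal sketch: flag FTP services in Shodan-style banner records.
# All records below are hypothetical, for illustration only.

def find_exposed_ftp(records):
    """Return (ip, banner) pairs for services listening on port 21."""
    exposed = []
    for rec in records:
        if rec.get("port") == 21:
            exposed.append((rec["ip"], rec.get("banner", "").strip()))
    return exposed

sample_records = [
    {"ip": "203.0.113.10", "port": 443, "banner": "HTTP/1.1 200 OK"},
    {"ip": "203.0.113.21", "port": 21, "banner": "220 FTP server ready\r\n"},
]

for ip, banner in find_exposed_ftp(sample_records):
    print(f"{ip}: {banner}")
```

In practice you would feed this from the Shodan API rather than a hand-written list, but the principle is the same: any port 21 banner on a storage system facing the Internet deserves a second look.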

No Antivirus ?

Volicon Observer is another system that is part of the Broadcasting infrastructure.



An old admin guide can be found here (the user manuals require a client account).

The client used for Volicon Observer can be any web browser, and the Volicon Observer web server seems to be using PHP (judging from the admin guide screenshots, page 17). The OS used seems to be Windows, due to references to C:\ file paths on the server (but I don't know whether this is a Server or Workstation edition). Page 94 provides an overview of the services running on the box.


An interesting remark can be found in the admin guide, in the section "What not to do on the server side":

 “Do not install Antivirus software until checking with the Volicon Support group. In addition see the Antivirus Excluded Storage Areas / Services to Scan”.

There are probably more similar systems in the Broadcasting infrastructure.

I don't know if the Volt is the famous "Transmission Server" currently in use at TV5 Monde.
I can't tell if this is the last server used before sending the content outside of TV5, and I can't confirm whether this was the system that went down. Last but not least, no information on the internal segregation between the Corporate and the Broadcasting infrastructures has been published.

But as detailed earlier, some of the broadcasting infrastructure components are third-party specialised hardware running Windows OS editions.
As a result, the same security constraints as on any Windows desktop or server have to be taken into account (antivirus, anti-malware, patching, permissions, credentials, Internet browsing protection, etc.).

It's unfortunately not uncommon for such critical "vendor black boxes" to be managed as independent third-party systems when, in the end, they are just regular Windows systems running on the LAN, with maintenance and patching left to the vendor's responsibility (and rarely done at the same frequency as the official patch releases)… This doesn't mean that was the case at TV5 Monde, but it remains a possibility…
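As a thought experiment on this "vendor black box" problem, the sketch below flags hosts whose last known patch date lags behind a monthly patch cycle. The host names and dates are entirely hypothetical; the point is simply that vendor-managed Windows boxes can be audited like any other asset in the inventory:

```python
from datetime import date

# Hypothetical inventory: last known patch date per host.
inventory = {
    "broadcast-srv-01": date(2015, 1, 13),  # vendor-managed, rarely patched
    "corp-dc-01": date(2015, 4, 14),        # patched on the regular cycle
}

def patch_laggards(inventory, today, max_age_days=31):
    """Return hosts not patched within one monthly patch cycle."""
    return sorted(
        host for host, last in inventory.items()
        if (today - last).days > max_age_days
    )

print(patch_laggards(inventory, date(2015, 4, 20)))  # the vendor box is flagged
```

A report like this, run monthly against the full asset inventory (vendor systems included), is a cheap way to surface exactly the kind of drift described above.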


From an external point of view, using only publicly available information, some questions remain open:

  • Why were a number of IT services available on the Internet, increasing the attack surface of the Corporate Infrastructure?
  • What kind of Internet browsing security and desktop protection (Mac and PC) was provided to the staff?
  • Was the Corporate network segregated from the Broadcasting network? How?
  • How was the specialised Broadcasting equipment (proprietary hardware, but running standard commercial OSes for some of it) managed (patching, antivirus, credentials)?

Obviously, investigations are still at a very early stage. Various unconfirmed rumours are already spreading on the Internet (a VBScript virus). The IT staff, the vendors and the French government are working on it, and I hope we will see a detailed technical report in the near future.

Finally, this is one more reminder of the importance of some basic security principles in a corporate environment:

  • Reduce Internet exposure to the minimum
  • Secure all exposed systems (SFTP and co.)
  • Use DMZ & bastion servers
  • Install, configure, maintain and monitor IDS / IPS
  • Apply patches & anti-virus, even on third-party systems
  • Perform vulnerability scanning and penetration tests
  • Educate users
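The first two points of this checklist can even be turned into a simple automated control. The sketch below compares externally visible ports against an approved allowlist and reports anything unexpected; the scan results and policy are made up for illustration:

```python
# Hypothetical external scan results: host -> set of open ports.
scan_results = {
    "mail-gw": {25, 443},
    "storage-01": {21, 443},  # FTP should not be exposed here
}

# Approved exposure policy per host (the allowlist).
allowlist = {
    "mail-gw": {25, 443},
    "storage-01": {443},
}

def unexpected_exposure(scan_results, allowlist):
    """Return {host: ports} for open ports not covered by the allowlist."""
    findings = {}
    for host, ports in scan_results.items():
        extra = ports - allowlist.get(host, set())
        if extra:
            findings[host] = extra
    return findings

print(unexpected_exposure(scan_results, allowlist))  # storage-01's FTP stands out
```

Run periodically against a real external scan, such a check would have flagged an Isilon answering on port 21 long before any attacker did.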

Thanks – Pierre-Olivier Blu-Mocaer – FixSing Consulting

Update :

12/04/2015 : (French online journalism platform) mentioned this analysis here :
12/04/2015 : Added IDS / IPS recommendation ( Thanks @bluetouff )
14/04/2015 : LeMagIT (French IT magazine) mentioned this analysis here :
15/04/2015 : Analysis mentioned in :

14/05/2015 : One month after the attack, there is still no detailed report available. However, the latest Shodan scan shows that a large cleanup has been performed on the external firewall rules.