Andre Rodier’s photo

André Rodier

Developer / Linux Administrator

☎️ Phone +44 7511 244 961
📧 Email
💬 Jabber andre@rodier.me
🌐 Site https://rodier.me
🗣️ Speaking English & French
🗺️ Work location UK & Europe
📇 vCard Contact details

September 2019 – Pirum Ltd

Senior system administrator

Pirum is a UK-based financial services technology vendor, specialised in securities lending.

  • Built an Ansible-based framework to deploy, maintain, and decommission physical, virtual, and cloud systems.
  • Led a small team through data-centre bootstrapping, deployment, and provisioning.
  • Designed and implemented custom monitoring and alerting solutions using Grafana and Prometheus.
  • Designed and implemented internal and external firewalls, DNS, and routing logic using infrastructure as code.
  • Built a VPN-based remote working solution with multi-factor authentication and Windows client support.

Ansible based deployment framework

Automatic system installation

Designed and implemented an Ansible-based framework that automated the full lifecycle of Debian systems across physical, virtual, and cloud environments, improving deployment consistency and reducing manual provisioning effort.

First, the framework allowed the deployment of new Debian systems entirely from scratch, whether physical or virtual, without any user input. The whole system definition, including disk layouts, was described in simple YAML configuration.
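Such a per-host definition might look like the following sketch; the field names here are hypothetical, since the real schema was internal to the framework:

```yaml
# Hypothetical example of a declarative host definition: the framework
# drove the Debian installer from data like this, including disk layout.
hostname: web01
domain: example.org
network:
  interface: eth0
  address: 192.0.2.10/24
  gateway: 192.0.2.1
disks:
  - device: /dev/sda
    partitions:
      - mount: /
        size: 20G
        filesystem: ext4
      - mount: swap
        size: 4G
```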

Completely offline installation of physical systems was also automated: the server's management interface was used to upload a custom Debian ISO image and start the installation remotely. The whole process was non-interactive.

Systems provisioning

The framework enforced a clear separation between the infrastructure definition and our Ansible roles library, with the infrastructure for physical servers, virtual machines, and cloud instances defined in one place.

The Ansible roles lived in a separate repository, fully decoupled from the infrastructure information, which kept them generic across target types (hardware, virtual machine, or cloud instance).

From the command line, we could run one or more roles on one or more systems. The framework supported installation, uninstallation, and upgrade, as well as dry runs, testing, and reporting.

Credentials encryption

All credentials were stored in a separate repository, encrypted with GPG for the team's keys. This allowed smooth integration with other repositories, granular access control, and better credential readability than Ansible Vault.

An abstraction layer allowed a different credential storage system per environment without affecting the roles' logic. For instance, on-premises environments used locally GPG-encrypted passwords (pass), while AWS environments could use SSM.
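A minimal sketch of such an abstraction layer in Python; the class names and environment keys are hypothetical, but the two backends (the `pass` password store and AWS SSM Parameter Store) are the ones used:

```python
import subprocess
from abc import ABC, abstractmethod

class SecretStore(ABC):
    """Common interface so role logic never knows which backend is in use."""
    @abstractmethod
    def get(self, name: str) -> str: ...

class PassStore(SecretStore):
    """On-premises backend: GPG-encrypted 'pass' password store."""
    def get(self, name: str) -> str:
        # 'pass show <name>' prints the secret on its first line.
        out = subprocess.run(["pass", "show", name],
                             capture_output=True, text=True, check=True)
        return out.stdout.splitlines()[0]

class SsmStore(SecretStore):
    """Cloud backend: AWS SSM Parameter Store (requires boto3)."""
    def get(self, name: str) -> str:
        import boto3  # imported lazily so on-prem hosts don't need it
        ssm = boto3.client("ssm")
        return ssm.get_parameter(Name=name,
                                 WithDecryption=True)["Parameter"]["Value"]

# Environment labels are illustrative; the real mapping was configuration-driven.
BACKENDS = {"on-prem": PassStore, "aws": SsmStore}

def store_for(environment: str) -> SecretStore:
    return BACKENDS[environment]()
```

The point of the design is that playbooks only ever call `store_for(env).get(name)`, so swapping backends never touches role logic.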

Custom reports

For auditing or reporting, custom PDF reports were generated and sent by email or stored on a Samba server. Reports included potential changes or the current state, with various levels of detail and formatting.

Technologies
  • Ansible, Ansible linting, YAML linting and schema validation.
  • Custom Ansible dynamic inventory in Python.
  • GPG with custom scripts for onboarding and for expired or revoked keys.
  • Credentials storage: pass password store.
  • ShellCheck and Python linting in Git hooks.
  • Operating systems: mostly Debian, with some CentOS nodes.
  • Jenkins and GitLab for continuous integration.
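As an illustration of the custom dynamic inventory: an Ansible dynamic inventory is simply an executable that prints JSON in the format Ansible expects. A minimal Python sketch, with hypothetical host names and a hard-coded dictionary standing in for the real infrastructure definition:

```python
#!/usr/bin/env python3
"""Minimal Ansible dynamic inventory: prints group/host JSON when
called with --list, as the dynamic inventory protocol expects."""
import json
import sys

# Stand-in for the real infrastructure definition (normally loaded from YAML).
INFRASTRUCTURE = {
    "hypervisors": ["hv01.example.org", "hv02.example.org"],
    "firewalls": ["fw01.example.org"],
}

def build_inventory():
    inventory = {group: {"hosts": hosts}
                 for group, hosts in INFRASTRUCTURE.items()}
    # Supplying _meta/hostvars up front avoids one --host call per machine.
    inventory["_meta"] = {
        "hostvars": {h: {} for hosts in INFRASTRUCTURE.values() for h in hosts}
    }
    return inventory

if __name__ == "__main__":
    if "--list" in sys.argv:
        print(json.dumps(build_inventory(), indent=2))
```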

Datacentre deployment

Bootstrapping

Using the framework above, we deployed two datacentres, starting with the external and internal firewalls and the hypervisors, and finishing with the virtual machines.

Security

For each datacentre, we used redundant external firewalls, connection tracking synchronisation, a dynamic routing protocol (BGP), and transparent proxies with traffic interception and whitelisted domains.

Traffic that could be used to leak information, such as DNS and NTP, was intercepted and redirected to the internal servers.

Finally, the Suricata IDS was deployed, with EveBox for visualisation and custom Prometheus metrics.

Client whitelisting

As the legacy systems relied on client IP address whitelisting, I developed the system that allowed the applications to update the firewall whitelists securely. All whitelists were synchronised and monitored using Prometheus metrics and OpsGenie alerts.
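On the firewall side, such an update reduces to adding validated addresses to an nftables named set. A hedged Python sketch (the table and set names are hypothetical; the real system also synchronised and monitored the sets):

```python
import ipaddress

def nft_whitelist_commands(addresses, table="inet filter",
                           set_name="client_whitelist"):
    """Validate client IPs and emit nftables commands that add them
    to a named set; raises ValueError on any malformed address."""
    commands = []
    for addr in addresses:
        ip = ipaddress.ip_address(addr)  # rejects anything that is not an IP
        commands.append(f"nft add element {table} {set_name} {{ {ip} }}")
    return commands
```

Validating before emitting commands matters here: the addresses come from applications, so nothing un-parseable should ever reach the firewall.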

Authentication

For server access, user and group authentication and authorisation were linked to Office 365 using the LDAP protocol. SSH keys were stored directly in the directory.

Monitoring

Built datacentre-wide monitoring and alerting with Prometheus and Grafana, increasing visibility across critical infrastructure and strengthening response to operational and security events.

Configured and tuned sensitive services under AppArmor security profiles, and set up OpsGenie alerts on security violation events.

Both the internal and external firewalls successfully passed regular penetration tests and security audits. Monthly PDF reports were generated with network rules and proxy accesses.

On the most sensitive systems, a small agent was installed to upload the logs to AWS S3.

Technologies
  • Mostly Ansible, and custom modules or shell scripts.
  • Operating systems: Debian.
  • Debian preseed with simple YAML configuration, Jinja templating.
  • Prometheus blackbox exporter with custom metrics written in Python or Shell.
  • Authentication and authorisation using RedHat SSSD.
  • Web site filtering with Squid whitelisting, using SNI only (no HTTPS inspection).
  • Custom OpsGenie alerts using the OpsGenie API.
  • Internal DNS servers with Unbound for caching, and PowerDNS with customized replication and SQLite storage backend.
  • Client whitelisting with nftables, chroot’ed SFTP server, and inotify kernel daemon.
  • Reporting: Pandoc and custom LaTeX code for PDF export, and Samba / postfix.

Remote working solution

For remote workers, I deployed an integrated VPN solution using both on-premises and cloud instances.

It comprised four redundant access points, with low-level packet authentication and multi-factor authentication for clients.

Onboarding used information directly from the on-premises Active Directory servers to send the instructions and the second authentication factor by email.

Technologies
  • Servers: Debian, both on Linode and on-premises.
  • Wireguard for site-to-site VPN between on-premise and cloud servers.
  • Windows VPN Client: OpenVPN, with multi-factor authentication.
  • Deployment: Ansible for both Linux and Windows systems, with WinRM authentication for the latter.

December 2016 – June 2019, SeeQuestor

Senior system administrator

SeeQuestor provided British and international police forces with a complete platform for video conversion, ingestion, and analysis, including motion detection, face recognition, and re-identification.

  • Prepared for and obtained the Cyber Essentials certification for the whole team.
  • Designed and implemented a secure remote-working VPN for two offices, with firewalls.
  • Built a framework to deploy the entire hardware appliance, hosting multiple operating systems (Linux, BSD, Windows), SAN storage, and NVIDIA CUDA hardware.
  • Designed and implemented a cloud version using Amazon Web Services GovCloud.

Cyber Essentials Plus certification

To work with national crime agencies in the UK, SeeQuestor needed strong security credentials, starting with the Cyber Essentials and Cyber Essentials Plus certifications.

I designed and deployed the network infrastructure to optimise the working environment and comply with Cyber Essentials.

I also detailed the practices for the developers, the managers, and the directors to follow in order to pass the security audit.

Furthermore, I conducted some basic penetration tests ahead of the official audit company's assessment.

Technologies
  • Firewall: OpnSense.
  • Network segregation using VLANs.
  • VPN server for remote workers: OpenVPN.
  • Site to site VPN for the other offices: IPSec.

Hardware and Software Installation

Designed and implemented the whole hardware and operating system stack needed to provide these services, migrating a single-system prototype to a self-contained rack system.

The system occupied a full rack and included high-speed SAN storage for video footage, Ubuntu-based processing servers, and a Debian-based firewall. Files were stored using file-level encryption and exposed through Samba to the Windows-based virtual and physical systems, for proprietary-format analysis and screen scraping.

Remote access was provided by two OpenVPN-based servers, one for administrators and one for customers, both using two-factor authentication.

The BSD-based firewall filtered ingress, forwarded, and egress traffic, and also enforced a web-site whitelist.

The whole system was provisioned using infrastructure as code, allowing automated deployment and bootstrap of the system.

Deployment
  • Operating systems: Debian, Ubuntu LTS and Windows 10.
  • Ansible, and Debian preseed.
  • Virtualisation: KVM and libvirt.
  • Monitoring: Zabbix.
  • Encryption: eCryptfs.
  • pfSense firewall.
  • Squid with whitelist model.
  • IDS: Snort.
  • OpenVPN with TLS packets authentication.

Amazon Web Services hosting

I used the same Ansible framework to deploy the platform on AWS, with little or no modification. This time the platform was publicly accessible, behind a load balancer. Various strategies were employed to secure it, such as AWS security groups and NACLs.

Servers and Software
  • Security: VPC, Security Groups, NACLs.
  • Deployment: Ansible with custom Python modules and plugins.
  • Remote access: OpenVPN.

Platform encryption prototype

I also implemented platform self-decryption on boot, to protect intellectual property and prevent unauthorised access. The platform was protected by a randomly generated code, created offline, with a mandatory online validation step.

Technologies
  • OS: Ubuntu Linux 16.04.
  • Languages: Go, shell script.

March 2015 – November 2016, Bulb Software

DevOps lead / full-stack developer

Bulbthings developed an asset management and life cycle application.

  • Built the entire stack on top of Google Compute Engine, using repeatable and idempotent infrastructure as code.
  • Designed the assets database for the application, using advanced PostgreSQL features, like hierarchical queries, JSON schema and materialised views.
  • Created the development environment with continuous integration, unit and integration tests, and code metrics reporting like code coverage and cyclomatic complexity analysis.
  • Integrated automatic front-end testing with Selenium, replaying complex scenarios with screen captures saved for the front-end developers.

Hardware and Software Installation

Bulbthings developed an asset management application to handle asset life cycles efficiently and reduce costs. It supported any type of asset: cars, printers, phones, etc.

The application targeted companies of any size, from small and medium-sized companies to corporations with international branches.

The initial application had a standard architecture: a MySQL database, a PHP/Symfony backend acting as a REST server, and an AngularJS/Metronic front end. The backend was complemented by other programs, written in Go, for HTML5 web sockets.

I took part in rewriting the whole application stack, with robust continuous integration techniques and bottom-up vertical development. We used advanced PostgreSQL features, such as hierarchical tree queries, dynamic schema updates, geolocation, and distance calculation.
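Hierarchical ("tree") queries of this kind are usually written as recursive common table expressions. A hedged illustration: the project used PostgreSQL, but SQLite is used here only to keep the sketch self-contained, since the CTE syntax is the same in both; the table and data are invented:

```python
import sqlite3

# A tiny parent/child asset table standing in for the real schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE asset (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT);
    INSERT INTO asset VALUES
        (1, NULL, 'fleet'),
        (2, 1, 'car-42'),
        (3, 2, 'tyre-front-left');
""")

def subtree(conn, root_id):
    """Return the names of all assets under (and including) root_id,
    using a recursive common table expression."""
    rows = conn.execute("""
        WITH RECURSIVE tree(id, name) AS (
            SELECT id, name FROM asset WHERE id = ?
            UNION ALL
            SELECT a.id, a.name FROM asset a JOIN tree t ON a.parent_id = t.id
        )
        SELECT name FROM tree;
    """, (root_id,)).fetchall()
    return [name for (name,) in rows]
```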

For the back-end, we used server side JavaScript (NodeJS), with a REST API framework and web socket implementation out of the box.

The whole stack was thoroughly checked by a continuous integration system, at many levels:

  • For the backend: unit tests and integration tests with coverage analysis.
  • Asynchronous testing of the REST API on each commit.
  • For the front-end: automated acceptance tests using two real web browsers and automatic screenshots.
Technologies

Backend

  • PHP Phalcon framework prototype.
  • Final rewriting using NodeJS 4, Restify framework, background worker processes and event handlers.
  • Unit tests: Mocha, ShouldJS.
  • Code coverage analysis: Istanbul.
  • Database: PostgreSQL and contrib modules, materialised views, cross requests, and common table expressions.

Frontend

Continuous Integration

Google Compute Engine

The instances were automatically deployed on GCE by Jenkins, using Ansible playbooks.

Each instance was automatically deployed on a subdomain and instantly accessible, without having to wait for DNS propagation.

We used a wildcard certificate to keep HTTPS support simple, as well as HTTP/2 to reduce the load.

We also used four front-ends to distribute the load using DNS round-robin.

Technologies
  • Google Compute Engine
  • Bash and shellcheck.
  • Ansible with playbook debugger.
  • Nginx with dynamic virtual hosts and HTTP/2
  • Wildcard certificate
  • Jenkins and Ansible playbooks
  • Slack integration for C.I. errors
  • Postfix for email reporting

September 2011 – March 2015, Indiefield

Developer / System administrator

Indiefield is a company specialised in market research fieldwork. In 2011, there was a strong desire to move its infrastructure away from Microsoft solutions in favour of open-source software.

  • Replaced the unmaintained proprietary directory server with replicated Samba servers, and gradually connected all the internal services for the staff.
  • Created a Linux-based virtualisation environment for the continuous integration servers, covering each deployment stage.
  • Built and hosted highly customisable software allowing managers to create advanced surveys without technical knowledge, including automatic reporting and payment.
  • Built the software that managed phone call records: compression, archiving, and emailing.
  • Designed a secure call centre using Linux-based workstations with daily automatic reinitialisation.

Modernisation of legacy environments

After working in an increasingly Microsoft-oriented environment, I chose this position as a “return to open-source fundamentals”. The company had a real desire to leave Microsoft technologies for solutions with less vendor lock-in, and my experience in both domains was a decisive criterion for this role. I migrated all the services to Ubuntu- and Debian-based environments.

It was a challenging position, with a huge workload. The environment itself was built upon a ten-year-old structure, mixing open-source and proprietary technologies.

All these applications have been gradually replaced by modern LAMP stacks using agile methodologies, continuous integration and unit testing.

I first installed a replicated, shared authentication system, so everyone had to remember only one password. I then replaced Active Directory with a Samba server and activated roaming profiles.

The Exchange server was replaced by three email servers hosted on two different internet connections. Emails were digitally signed and certified. A dual internal/external Jabber server was also included.

I also deployed solutions, such as a VPN and a private cloud, allowing remote workers to access all the services transparently from home.

Finally, I replaced the in-house intranet with a secure, modern wiki.

Technologies
  • Virtualisers: KVM on Ubuntu LTS.
  • Real time master/slave replication of DNS and LDAP servers (bind and OpenLDAP)
  • Replication of MySQL and PostgreSQL
  • Shared storage: NFSv4 via Kerberos.
  • Servers: Postfix, Dovecot, Davical.
  • Antispam: SpamAssassin, Amavisd, ClamAV.
  • Samba 3.6, with synchronised PDC and BDC.
  • Redundant OpenVPN servers.
  • Private cloud: OwnCloud
  • Wiki: xwiki with LDAP authentication.

Consumer directory migration

The “Consumer Directory” is a web site containing a large base of consumers, who can register themselves. Once registered, they receive invitations to take part in surveys, after filling screeners.

I used Drupal as a content management framework to build a feature-rich, modern version, and transparently migrated all the members. Their details were validated and reformatted, and their initial passwords kept.

One of the prominent features was geolocation, allowing a simpler search than just by post codes.

The system was customised for mass messaging and automatic bounce management. All outgoing emails were covered by DKIM signatures and SPF records, reducing false classification as spam.

Finally, all this stack was checked using continuous integration tests in a real web browser.


Online recruiters and fieldworkers payment system

The company initially used a Microsoft Access database to store its recruiters' details, and sent invoices by paper mail.

Using the same content management framework, I created an online system, imported the fieldworkers database, and added some useful features:

  • Fieldworkers were able to update their details and upload their ID themselves.
  • Emails alerts were automatically sent as soon as new jobs were posted.
  • Project managers were able to search fieldworkers using various criteria, like domains of expertise, geolocation or personal data.
  • Recruiters were able to submit their invoice directly to the system.
  • The invoicing process was integrated into a workflow involving project managers and the finance department.
  • Each action was logged and emails were sent on specific events.
  • Project managers were able to rate recruiters or log misbehaviour. This was then accessible by the rest of the team.
  • Statistics and Financial reports by month, recruiter, project, etc. with email alerts on budget overrun.

The authentication system was transparently integrated with our internal directory server, allowing the project managers to authenticate with their normal password.

Technologies
  • Drupal 7 with intensive CCK and views usage.
  • Profile 2, migrate and migrate UI modules.
  • Continuous Integration: Jenkins and Selenium.
  • Email system: Postfix, dovecot, OpenDKIM.

Main company web site

To build the new web site, I used the same framework, this time as a content management system. With a customised responsive theme, the same information was presented differently on phones, tablets, and desktop screens.

Using a CMS allowed some departments to add content themselves, which was then validated and aggregated to create new pages. The web site was connected to an analytics system, with weekly and monthly reports emailed to client services and the directors.

I have also integrated a real time chat system, connected to the client services department.

Some custom modules were developed, for instance to handle clients' mailing-list unsubscriptions.

Technologies
  • Main web site: Drupal 7
  • Analytics: Piwik and the Drupal Piwik integration
  • Web chat: livehelperchat
  • Continuous Integration: Jenkins and PHPUnit

Interviews recording system

Phone interviews from the call centre were manually archived and processed by an operator every day. I replaced this with a set of thoroughly tested Perl scripts, which were faster and less error-prone than manual processing.

The script was able to combine related records together and encode them in different formats, according to each client's preferences.
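The grouping step can be sketched as follows, rewritten in Python rather than the original Perl, and assuming a hypothetical file-naming convention in which recordings from the same interview share a common identifier prefix:

```python
from collections import defaultdict
from pathlib import Path

def group_related_records(filenames):
    """Group call recordings belonging to the same interview.

    Hypothetical convention: names like '<interview-id>_<part>.wav',
    so everything before the first underscore identifies the interview."""
    groups = defaultdict(list)
    for name in filenames:
        interview_id = Path(name).stem.split("_")[0]
        groups[interview_id].append(name)
    return dict(groups)
```

Once grouped, each list of parts can be concatenated and handed to an encoder for the client's preferred format.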

The system was also able to upload records to clients' web sites via FTP, or to archive them on Blu-ray discs at the end of each month.

Finally, an email was generated every night with a summary of events.

Technologies
  • Perl and SimpleTest test and mock framework.
  • Perl FTP and Mime Mail
  • Perl DBI for MySQL
  • Avaya and Asterisk phone systems.
  • Encoding formats: Speex, MP3 and OGG/Vorbis

Vehicles Registration Research

This web site was developed to query the UK's national vehicle registration services and obtain vehicle details from a plate number: make, model, power, etc.

Technologies

February 2008 – September 2011, Red2

Senior developer and system administrator

Red2 was a British B2B start-up providing services to insurance, travel, and auction companies.

  • Built a resilient working environment for the team, using open-source software.
  • Created virtualisation environments for the Windows virtual machines used in both development and live systems.
  • Built email and collaboration servers for the whole team, with shared resources such as calendars and address books.
  • Introduced agile methodologies, bug-tracking software, and a continuous integration system.
  • Installed an advanced wiki, with dynamic updates linked to our code base and continuous integration servers.
  • Implemented monitoring and alerts using an open-source IDS.

Working environment foundations

My first task at Red2 was to build the foundations of a proper working environment. I minimised single points of failure by replicating the most important services.

I started by installing two virtualisation hosts for Windows and Linux guests, with disk snapshots and archiving.

I also installed a VPN server, along with two DNS servers and custom domain names for internal development and continuous integration.

An entirely open-source email system was built, with advanced features: shared and public folders, fine-grained ACLs, and a powerful mail filtering system (Sieve).

The system was extended with a shared calendar and address book, using well-established standards (CalDAV and CardDAV).

The webmail offered access to all settings, such as folder ACLs and the anti-spam policy rules.

A powerful, extensible wiki and mash-up platform hosted our internal procedures and documentation.

Finally, I installed two Windows domain controllers, with roaming profiles stored on a shared NAS.

Authentication to all these services used redundant credential-management servers supporting multiple protocols.

Technologies
  • Virtualisers: kvm on Ubuntu LTS.
  • Real time master/slave replication of DNS and LDAP servers (bind and OpenLDAP).
  • Replication of MySQL and PostgreSQL.
  • Shared storage: NFSv4 and Kerberos authentication.
  • Servers: Postfix, Dovecot, Davical.
  • SpamAssassin, DSPAM, ClamAV, fail2ban.
  • Samba 3.5, with synchronised PDC and BDC on the two LDAP servers.
  • Mindtouch dekiwiki, with custom modules.
  • OpenVPN, with custom scripts.
  • Firewall: iptables scripting and ufw.
  • Monitoring: Snort IDS, fwanalog and NTop.

Continuous integration environment

The second step was to build a proper continuous integration environment, in accordance with agile methodologies:

  • Source control servers with replication and scheduled backups, linked with a modern ticket tracking system integrated into Visual Studio.
  • Continuous integration server, with a build pipeline, source control polling, and email notifications on major build events.
  • Deployment on custom domain names for each stage of development (latest builds, IAT, UAT, staging, etc.).
  • Acceptance test server, with scheduled tests and email notifications. Test cases could be written in several languages and drive all major browsers on Windows and Linux.
Technologies
  • Subversion and WebSVN, slave replication and rsync server.
  • Redmine with custom plugins.
  • Automatic and manual deployment from Jenkins continuous integration server, using the build pipeline plugin.
  • Email alerts to developers, in public folders for each project.
  • Selenium server, with automated tests in C#.
  • MSBuild with custom tasks developed in C#.

Hewitt auction project

For this project, I worked with the team on the design of an extensible auction platform, using distributed web services.

We used agile methodologies to create user stories from the requirements, and continuously adapted the stories to the timelines.

We also used modern approaches and techniques in our development. For instance, both the validation rules and the data models were transparently shared between the web services and the user-interface projects, which drastically reduced development time and simplified our development model.

Technologies
  • ASP.NET MVC 3, C# 4.0, Entity Framework.
  • jQuery, Ajax and Comet.
  • ASP.NET web services.
  • xUnit test framework.
  • Ninject.
  • MSBuild custom scripts.
  • Subversion with "external" repositories.

Salvage Direct auction project

Salvage Direct was a salvage automobile auction site that enabled registered, verified users to bid on and sell salvaged vehicles.

I added Comet technology to push bid events to browsers in real time.

I also developed a separate image server hosting all photos from the auction web sites, with advanced features:

  • Automatic thumbnail creation on upload.
  • Automatic ZIP file decompression on upload.
  • Graceful handling of non-existent images.
  • Secured FTP access for customers.
  • Simple REST API to list images.
Technologies
  • ASP.NET MVC 2, C# 3.4.
  • jQuery and Dojo libraries.
  • Ajax and Comet technologies.
  • AspComet library.

Image server

  • Debian Squeeze, with inotify scripts and the vsftpd FTP server.
  • Perl and GraphicsMagick for image management.
  • Apache and PHP for the REST API and image serving.

Ediamond project

For this auction project, I worked with the development team to embed a modern administration interface inside the web browser, and to optimise the execution speed of complex JavaScript code.

The administration interface allowed staff to monitor transactions and communicate with buyers in real time.

I used both Ajax and Comet technologies intensively, pushing them to their limits.

I also deployed a load balancer in front of the web servers to dispatch requests.

Technologies
  • jQuery and Dojo JavaScript libraries.
  • Ajax and Comet technologies.
  • AspComet library.
  • Load balancer: nginx.

National Museums Social Networking Project

The Creative Space project is a social networking application that allows users to search across nine museum and gallery collections.

I implemented its distributed architecture across nine heterogeneous servers, allowing real-time communication.

The whole system was built on top of Drupal, with both standard and custom modules specifically developed for the project.

Each instance used remote procedure calls to exchange and synchronise content, with automatic detection of offline instances and offline replication.

The content exchanged by the instances was encrypted, using up-to-date standards for data confidentiality.

User accounts were replicated and synchronised in real time across all instances, and single sign-on was implemented, allowing logged-in users to navigate the whole system transparently.

Technologies
  • Drupal CMS (v5) on PHP, with standard modules such as CCK, buddy lists, taxonomy, upload, throttle, and watchdog.
  • Custom Drupal module development, using XML-RPC, encryption, OpenSearch, and distributed RSS feed generation.
  • Deployment and customisation on various operating systems, including Red Hat Linux, Novell SLES, CentOS, Solaris, Debian Etch, and Windows Server 2003.

Web crawling system for Dun & Bradstreet UK

Designing and implementing a custom multi-threaded crawler engine to extract financial companies’ contact information, featuring:

  • Extraction and automatic validation of emails, phone numbers (UK and international rules), and UK postal addresses.
  • Embedded JavaScript interpreter, for complete extraction.
  • Anonymous proxies and multithreading for querying public search engines.
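The email-extraction step essentially boils down to pattern matching over crawled pages. A simplified Python sketch (the real engine was written in Perl and applied stricter validation rules):

```python
import re

# Deliberately simplified pattern: good enough for harvesting candidates,
# which would then go through stricter validation.
EMAIL_RE = re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b")

def extract_emails(page_text):
    """Return the unique candidate email addresses found in a crawled page."""
    return sorted(set(EMAIL_RE.findall(page_text)))
```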

The use of anonymous proxies to query the search engines is legal, and was necessary to avoid IP bans.

Technologies
  • Crawler: various Perl modules (esp. WWW::Mechanize) on Debian Linux.
  • Analyser: C# 3.5 on Windows 2003.
  • Storage: MySQL on Debian.
  • Proxies: Privoxy/Tor and tinyproxy.
  • Embedded JavaScript: Perl/Gecko engine.

Stock exchange and financial data extraction for Dun & Bradstreet UK

Using an internal Dun & Bradstreet crawling technology to collect and analyse data from international financial regulators and stock exchanges in more than 40 countries.

  • Web crawling of regulators' web sites.
  • Collecting, analysing, and persisting the data.
  • Aggregating the data into a centralised database.

Designing a full replacement using an advanced scripting language.

Details

Old system

  • Microsoft Excel VBScript / web queries.
  • Microsoft Access, VBScript.
  • PDF extraction software (Able2Extract).

Replacement

  • Perl WWW::Mechanize module.
  • pdf2text.

Insurance web site engine, J.L.T Insurance

Designing and implementing a new insurance web site engine for JLT Insurance, based on complex XML form definitions, with both client-side and server-side validation.

  • Creating an abstraction layer on top of the old pricing engine, and exporting it as a web service.
  • Designing and implementing the migration from the initial relational database approach to a hybrid approach, using both relational and document oriented database (XML).
Technologies
  • C# 3.5, using Visual Studio 2008.
  • ASP.NET, with the MVC framework.
  • Extensive use of LINQ over XML, CSV and SQL sources.
  • Microsoft SQL Server 2005 and 2008, with stored procedures.
  • JavaScript libraries: ExtJS, jQuery and Scriptaculous.
  • Integration of the Ext toolkit and jQuery UI on the new web site.
  • REST and JSON for the web services.

September 2006 – November 2007, Legal & General

Analyst Programmer

  • Designing and building a modern front-end application providing financial management information to the senior management and board of Legal & General. The system was fed with data from a Business Objects Universe.
  • Creating the prototype for the new version of the business reporting application, as a proof of concept based on emerging technologies.
  • Designing and implementing the user interface for the internal Balanced Scorecard project.

Business reporting application

Technologies
  • Reporting application: PHP, JavaScript, and VML on IE.
  • Balanced Scorecard: Scriptaculous JavaScript library.
  • Prototype: built on top of the Zend MVC framework and Google Chart Tools.

2000 – 2004, Sinfo (sole trader in France)

Director

In 2000, I created a sole-trader company in France to provide specialised development and services to companies in France and the United Kingdom.

Below are some projects I worked on during these years.

  • Creation of an IFRS forum on a private extranet (International Financial Reporting Standards).
  • Creation of the Legal & General Running Club web site, with shared forum authentication.
  • Creation of Tiaret, a North African community web site, with a forum, a wiki, and updatable dynamic content.
  • Creation of Sollidays, a property rental website, with features such as an administration interface, PDF booking-form generation with rates, and real-time availability updates.

Various developments for Perseus Management Consultancy

  • Creation of the Legal & General Running Club website, with a gallery and a forum.
  • Creation of the Bildo open source project, a non-obtrusive web component to embed photos on a website, used by my clients.
Technologies
  • Bildo web component : PHP 5, and the Postlet Java applet for mass upload and photo resizing.
  • Sollidays : the PHP FPDF library, and GIMP for the graphical design.
  • L&G Running Club : a custom PHP/MySQL script to synchronise passwords with the phpBB forum.
  • IFRS : the MODx Content Management System.
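
The password synchronisation above can be sketched as follows. This is a hypothetical stand-in for the original PHP/MySQL script, written in Python with SQLite for a self-contained demo; phpBB 2.x stored passwords as plain MD5 hex digests, and the table and column names are assumed for illustration:

```python
import hashlib
import sqlite3

def sync_password(conn, username: str, new_password: str) -> None:
    """Mirror a site password change into a phpBB 2.x style users table.

    phpBB 2.x stored passwords as plain MD5 hex digests; the
    phpbb_users/user_password schema names are assumed here.
    """
    digest = hashlib.md5(new_password.encode("utf-8")).hexdigest()
    conn.execute(
        "UPDATE phpbb_users SET user_password = ? WHERE username = ?",
        (digest, username),
    )
    conn.commit()

# Demo with an in-memory SQLite database standing in for MySQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE phpbb_users (username TEXT, user_password TEXT)")
conn.execute("INSERT INTO phpbb_users VALUES ('alice', '')")
sync_password(conn, "alice", "s3cret")
row = conn.execute(
    "SELECT user_password FROM phpbb_users WHERE username = 'alice'"
).fetchone()
print(row[0])
```

The real script ran the update in the other direction too, so that a password changed on the forum stayed valid on the main site.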

Shipping service system, Frankal, Perpignan

Creation of a full-featured program, covering everything from package delivery to customer invoice management.

  • Automatic data import from partner websites.
  • Delivery points visualisation on a graphical map.
  • Delivery slip generation, with EAN barcodes.
  • Customer address checking, using the Yellow Pages directory.
  • Personal invoice processing system.
  • Customer invoice generation.
Technologies
  • LAMP architecture on Debian.
  • PHP TCPDF library for barcode generation.
  • Graphical map generation using the GeoNames database.
  • Perl/Mechanize for Yellow Pages crawling.
  • Perl DBI to import data from CSV and XML files.
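
As a side note on the barcode step above, the check digit of the EAN-13 codes printed on such delivery slips is computed as follows. This is a minimal sketch in Python, independent of the original PHP/TCPDF code:

```python
def ean13_check_digit(body: str) -> int:
    """Compute the EAN-13 check digit for the first 12 digits.

    Per the GS1 standard, digits in odd positions (1st, 3rd, ...)
    are weighted 1 and digits in even positions are weighted 3;
    the check digit brings the weighted sum up to a multiple of 10.
    """
    if len(body) != 12 or not body.isdigit():
        raise ValueError("expected exactly 12 digits")
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(body))
    return (10 - total % 10) % 10

# Example: the first 12 digits of the well-known EAN 4006381333931
# yield check digit 1.
print(ean13_check_digit("400638133393"))  # → 1
```

TCPDF then only needs the full 13-digit string to render the barcode itself.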

1996 – 1999 · High Tech Systems

C/C++ Analyst Programmer

Now known as Nexeya, HTS was a medium-sized French company, specialised in industrial engineering and real-time data acquisition systems.

  • Wrote and maintained embedded software on VxWorks, using C.
  • Ported the legacy Motif-based code base to a dynamic front-end application on Windows NT, using the Microsoft Foundation Classes and C++.

Data acquisition on embedded firmware

This company was working with the French defence and aerospace industry.

I started by working on advanced embedded systems, programming in C using VxWorks.

I also developed command-line front-ends in C on Unix AS/400 workstations, and in C++ on Windows workstations.

Through this work, I acquired substantial low-level and real-time programming skills.

Technologies
  • C and C++ programming on Unix AS/400 and Linux systems.
  • C programming on data acquisition cards.
  • C and C++ programming, using Microsoft Visual Studio.
  • C++ on Windows NT/95.
  • C and Java programming on VxWorks embedded systems, using Wind River cross-compilation tools.

1998 – 2000 · Education: BTS Informatique Industrielle

This is an Advanced Technician Certificate in Industrial IT / Embedded Systems Programming, equivalent to a Higher National Diploma (HND), level 5.
This 2-year post-baccalaureate vocational qualification focused on industrial computing, embedded systems, and programming.

As the training focused heavily on low-level programming in industrial environments, it was a revelation for my current occupation. Although the course dates back to 1998, I still use today the strong knowledge acquired in both high-level analysis and low-level programming.

Theories

  • Methods of Analysis and design
  • Information Technology
  • Information Systems and Databases
  • Programming Languages
  • Software Engineering
  • Networks and distributed systems
  • Artificial Intelligence
  • General knowledge and Business

Practices

  • Modeling languages : UML and OMT (Object-Modeling Technique).
  • Learning assembly languages for the Motorola 68k and Intel x86 processor families.
  • Learning and practising ANSI C in POSIX environments (Unix & OS-9).
  • C & C++ programming on Windows, using the Microsoft Win32 API.
  • Advanced C++ concepts : templates, inheritance, polymorphism, the STL, etc.
  • Theory and practice of additional languages : Lisp, Forth, Pascal & Java.
  • Stack-based languages and virtual machines; Reverse Polish Notation.
  • Learning the Unix and OS-9 operating systems in industrial environments.
  • Low-level networking standards : TCP/IP, Token Ring, NetWare, etc.
  • Artificial intelligence, using the Stuttgart Neural Network Simulator.
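
The stack-based evaluation mentioned above can be illustrated with a tiny Reverse Polish Notation evaluator. This is a hedged sketch in Python, not actual course material:

```python
def eval_rpn(expression: str) -> float:
    """Evaluate a Reverse Polish Notation expression, e.g. "3 4 + 2 *".

    Operands are pushed onto a stack; each operator pops two operands
    and pushes the result, exactly as a stack machine would.
    """
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    stack = []
    for token in expression.split():
        if token in ops:
            b = stack.pop()  # right operand is on top of the stack
            a = stack.pop()
            stack.append(ops[token](a, b))
        else:
            stack.append(float(token))
    if len(stack) != 1:
        raise ValueError("malformed expression")
    return stack[0]

print(eval_rpn("3 4 + 2 *"))  # (3 + 4) * 2 → 14.0
```

The same push/pop discipline underlies Forth and most bytecode virtual machines.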

Technical Summary

  • DevOps and continuous integration: Ansible / Terraform, GitLab / Jenkins
  • Mastered operating systems: Debian / Ubuntu / RedHat / OPNsense
  • Cloud and virtualisation: Vultr / Linode, AWS / Google Cloud
  • Networking: nftables, WireGuard / OpenVPN, BGP
  • Security: nftables, Suricata, AppArmor / SELinux
  • Network services: Nginx / Apache, PowerDNS / Unbound, Samba4 / NFS4
  • Email and other: Postfix / Dovecot / SPF, DKIM & DMARC, XMPP, CalDAV / CardDAV
  • Databases and storage: PostgreSQL, MySQL / SQLite / MongoDB & Redis / InfluxDB
  • Monitoring and alerting: Prometheus & Grafana, Zabbix, Nagios
  • Programming: Shell, Python / Perl, C / C++ / C#, JavaScript / TypeScript, Golang / Rust

Other information

Various

  • Graduated from a two-year course, covering the theory and practice of maintenance, diagnosis, and repair of audio and video electronic equipment.
  • Proficient in LaTeX, and various related open source software.
  • Creator and user of self-hosting solution HomeBox.
  • Languages spoken : fluent French and English, and basic Spanish.
  • Driving licence for cars, motorbikes and motor boats.

Aims

  • Keep myself up to date with low-level computer technologies and languages.
  • Carry on contributing to open source software, personally and professionally.
  • Move to a role with responsibilities, architectural decision-making, and project management.
  • Obtain I.T. certifications related to generic Linux systems rather than major cloud companies.