Version 1.1 – 12/06/2018
If you believe you have found a vulnerability in any of our applications, please do not disclose it publicly; instead, send an email to firstname.lastname@example.org. We have an internal Security Incident Response policy that dictates how we handle, escalate, and communicate an incident.
We apply CIS hardening benchmarks when building the AMIs used to deploy our nodes. All systems are monitored and regularly scanned for vulnerabilities and anomalies.
Commands run on a system, system and application component events, and application requests are all logged and shipped to our third-party logging aggregator, encrypted with TLS. We use technologies such as Elasticsearch/Kibana, AWS CloudTrail, and Wazuh to provide an audit trail over our infrastructure and applications. Auditing allows us to perform ad-hoc security analysis, track changes made to our setup, and audit access to every layer of our stack.
Every microservice runs inside a well-defined Docker container that is granted only the specific levels of access it needs to select controllers. We use Docker to avoid erroneous instance-configuration changes, upgrades, and corruption, which are common sources of security breaches. We employ a least-privilege philosophy for all of our services and employees.
We take the necessary precautions to ensure that every layer involved in data transfer is secured by best-of-breed technologies. Our network is segmented/controlled using AWS security groups, VPCs, NACLs, and additional measures at the application level. Through in-depth network monitoring, we are able to detect anomalies and take a proactive approach to eliminating potential breaches.
We have functioning, frequently used automation in place so that we can safely and reliably roll out changes to both our application and operating platform within minutes. We typically deploy several times a day, so we have high confidence that we can ship a security fix quickly when required.
Only employees who require it are allowed to access production data. Role-based access control and 2FA are employed wherever possible and prudent. All access to production data is logged for auditing purposes. We regularly review which accounts can access our systems and the permissions they hold.
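The default-deny spirit of role-based access control can be sketched in a few lines. The roles, permission names, and check function below are hypothetical illustrations, not a description of our actual implementation:

```python
# Minimal RBAC sketch: roles and permission strings here are
# hypothetical examples, not real Lumo roles or scopes.
ROLE_PERMISSIONS = {
    "engineer": {"read:logs", "deploy:staging"},
    "sre": {"read:logs", "deploy:staging", "deploy:production", "read:production-data"},
    "support": {"read:logs"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Return True only if the role explicitly grants the permission
    (default deny, in keeping with a least-privilege philosophy)."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# Support staff cannot read production data; SREs can.
print(is_allowed("support", "read:production-data"))  # False
print(is_allowed("sre", "read:production-data"))      # True
```

The important property is that an unknown role or an unlisted permission falls through to a denial rather than an allowance.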
All data in transit is encrypted with TLS v1.2. All sensitive data at rest is encrypted with AES-256-GCM. Sensitive data is defined as data related to any users or the business. Non-sensitive data includes public flight information that is not linked to any user.
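As an illustration of enforcing the TLS v1.2 floor, here is a sketch using Python's standard `ssl` module. Our production services are not necessarily written in Python; this only shows the policy expressed in code:

```python
import ssl

# Build a client context and refuse anything older than TLS 1.2.
# (ssl.TLSVersion is available in Python 3.7+.)
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.1 and below

# Certificate verification and hostname checking remain on by default.
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname
```

Setting a minimum version on the context, rather than pinning an exact protocol, lets connections negotiate TLS 1.3 where both ends support it.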
We rely on Amazon's exceptionally flexible and secure cloud infrastructure to store data logically across multiple AWS regions and availability zones. AWS makes abiding by industry and government requirements simple and ensures a high standard of data security and protection. For example, AWS infrastructure aligns with IT security best practices and follows a number of compliance standards. All data centers that run our solution are secured and monitored 24/7, and physical access to AWS facilities is strictly limited to select AWS staff. More detail is available in AWS's documentation on data center controls.
Lumo is not in the business of storing or processing payments. All payments made to Lumo go through Stripe. Details about their security setup and PCI compliance can be found at Stripe’s security page.
We conduct company-wide security awareness training that provides education on the types of data that Lumo has (including both business and customer data), how we protect it, and top security vulnerabilities that the entire organization should be aware of (weak passwords, phishing, etc.).
In addition, we conduct an annual engineering-focused security awareness training aimed at anyone with access to any Lumo infrastructure or customer data. This training goes into technical details about attack vectors and how the engineering team can keep our business and sensitive data safe.