Secure Practice has developed an innovative service to help deliver targeted security training to your colleagues, as well as key performance indicators (KPIs) for the human side of your organization's security work.
A unique combination of multi-disciplinary research, data modeling, machine learning algorithms and visualizations has made this kind of innovation possible.
Privacy and data protection have also been key to building a sustainable approach to human cyber risk management with the desired level of accuracy.
After all, a data-driven service like this revolves around data, and by measuring human risk we are definitely processing personal data.
With a human-centered approach to both product development and data protection, our team at Secure Practice has always worked to build user experiences that people love. To achieve this, we believe that building trust through privacy and transparency is imperative.
Regulatory Sandbox and DPIA
Therefore, it was a "no-brainer" for us to proactively apply for participation in the regulatory sandbox for responsible artificial intelligence (also known as the "AI sandbox"), when it was announced by the Norwegian Data Protection Authority (DPA) in early 2021.
In brief, the AI sandbox was funded by the Norwegian government to promote privacy innovation in artificial intelligence development projects. The Norwegian DPA selected four projects to participate in the first cohort, including Secure Practice. Alongside our own efforts, the DPA assigned a dedicated senior legal professional to work on our project (as well as one other project) for about six months. In addition, we benefited from multiple other project resources, such as a technologist and a sociologist, and from common activities for all sandbox participants.
During the sandbox project, we held multiple workshops with the DPA concerning legal, technical and socio-technical questions about our innovation in development. Together with the DPA, we also held workshops with the Equality and Anti-Discrimination Ombud (LDO) in Norway, with a group of labor organizations, and with employees as data subjects.
Based on this input, we also carried out a thorough Data Protection Impact Assessment (DPIA). With valuable feedback from the DPA, the DPIA work resulted in a document containing updated risk assessments and privacy-related information, which is revised periodically. We are happy to make this document available for review by (potential) customers upon request.
The DPA also published a public version of the final project report on their website in February 2022. There you can read about specific challenges we faced during the project, framed around topics of general interest to others.
Practical implications
Some of the legal questions discussed in the sandbox project and DPIA reports include:
Data controllership: Who is responsible for complying with the privacy policy?
Legal basis: Can the tool be used and further developed legally?
Data subjects: How does the tool affect employees?
Through a very fruitful collaboration with the DPA on these questions, our sandbox participation resulted in both technical and legal efforts to reduce key risks.
Possibly the most essential risk was that employers could use individual cyber risk data to impose negative consequences for some employees, as well as "positive" (yet still unequal) discrimination in favor of others.
Therefore, the most essential output from our project was the absolute necessity of protecting the identities of individual employees in relation to risk data.
As a consequence, we have implemented technical controls to ensure that individual risk scores are not exposed to the employer.
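As a simplified illustration of this kind of control, consider the sketch below. The function names and scores are hypothetical, not our production code: the idea is simply that individual risk scores are only ever used internally and shown to the employee themselves, while employer-facing queries return aggregate statistics with no identities attached.

```python
from statistics import mean

# Hypothetical, made-up scores for illustration only.
RISK_SCORES = {"alice": 0.82, "bob": 0.35, "carol": 0.57}

def employee_view(user_id: str) -> dict:
    """An employee may see their own score, e.g. to receive targeted training."""
    return {"user": user_id, "risk_score": RISK_SCORES[user_id]}

def employer_view() -> dict:
    """The employer only ever receives statistics across the workforce,
    with no identities attached to any individual score."""
    scores = list(RISK_SCORES.values())
    return {
        "employees": len(scores),
        "average_risk": round(mean(scores), 2),
        "max_risk": max(scores),
    }
```

The point of the separation is that no employer-facing code path has access to the mapping between identities and scores, only to the scores in aggregate.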
However, with a standard legal setup for cloud-based software services like Secure Practice, the employer (our customer) could simply demand that such risk data be handed over upon request. This is because our customer is the data controller, while Secure Practice, in line with our data processing agreement, is simply a data processor whose right to process data is strictly limited to the instructions from the controller. And those instructions could include the customer's right to access individual risk scores.
The DPA nevertheless proposed a legal mitigation for this risk, which we accepted as a solution:
Joint controllership for individual profiling data related to human cyber risk measurement.
As a consequence, we have included this arrangement in our standard data processing agreement. With this in place, Secure Practice is now in a position to reject any customer (employer) request for risk data relating to individual users (employees). For any other data, the standard legal setup with the customer as controller and Secure Practice as processor applies just as before.
While the idea of joint controllership may be new to many, it is really not that complicated. In practice, Secure Practice takes on independent responsibility for privacy by design, for transparency and information to users, for responding to individual data subject requests as required (for the specific data under joint controllership), and for being independently subject to regulatory audits and fines.
Transparency and assurance
For our customers, joint controllership brings no real downsides compared to the alternative: with risk scores for individual employees visible to the employer, the system would become an invasive surveillance measure, and possibly illegal to use in EU countries, given the GDPR's requirements for data subjects' rights to freedom and privacy.
Instead, Secure Practice can offer organizations a solution for measuring and managing human cyber risk on an individual level, thoroughly validated against the GDPR based on the following two clear purposes for processing:
Provide organizations with statistical data on human cyber risk;
Provide employees with automated, targeted training based on their risk scores.
As mentioned above, we welcome any request for access to our DPIA document, which is well suited to serve as a basis for your own independent risk assessment as our customer and (joint) data controller. Our team is proud to offer this high level of innovation together with privacy and regulatory assurance.
For end users, we have implemented a dedicated page in our learning portal with information about data processing, privacy and transparency features. This page is linked from the learning portal dashboard, and we encourage you to tell your colleagues about it.
Should you have any other questions about our services, feel free to contact support anytime.