Welcome to the EOSC Future wiki!

IMPORTANT: Please be aware that this Wiki space is no longer actively maintained. The information in it applies to the discontinued EOSC Marketplace and Provider Portal, which have been replaced by the EOSC EU Node.
Information related to the EOSC EU Node is available via the official page <here>

The EOSC Security Operational Baseline sets minimum expectations and puts requirements on the behaviour of those offering services to users, and on communities connected to the EOSC, when interacting with the EOSC infrastructure and peer services. Worded in an intentionally concise manner, the 12 key requirements may give rise to additional questions, or in general can benefit from concrete examples and guidance. In this "FAQ" document, each of the key baseline items is put in context with additional examples, best practices, and generally helpful ideas.


Can you elaborate on what is meant by item 3 and its incident response requirements?

Item 3 talks about security incident response. In an interwoven environment it is vital that data about incidents is shared and communicated to detect, analyse, contain and eradicate malicious actors while preserving the necessary evidence for analysis and post-processing. For EOSC, there is a dedicated team of incident response specialists to aid with this task. This team can also communicate between different service providers affected by the incident, help in getting necessary data from related services and disseminate data to help others.

For incident response, there is a documented process available on the EOSC Wiki. It acts as a recommendation and guideline to help the different actors in case of computer security incidents. It is strongly recommended that all service providers implement the procedure as fully as possible, adapted to the needs recognised by the service owners and operators. The starting point for all providers is to be aware of the process and of where they can get help in case of need, and to understand the need to share information to protect EOSC and other service providers.

You can find the procedure in the EOSC Future ISM.

The EOSC incident response team can be contacted via abuse AT eosc-security.eu.

What are 'IT security best practices' in item 7?

On a global scale there is a myriad of documents and sources defining best practices for securing different types of information systems, and even entire organisations. It is important to follow well-known recommendations that fit your needs. These can depend on the scale of your service, your organisation, your technology choices and even your service’s location. Many organisations have a set of requirements derived, for example, from certifications such as ISO/IEC 27001 or legislation such as the GDPR. Your organisation may also have its own conventions, which you can find out from internal sources. It is important that you take these into consideration, and add the building blocks that are relevant to your service. Some resources are listed below. They can be a starting point for you, especially if there are no written security policies or recommendations to follow in your organisation.

Generic information security

  1. ISO standards, for example the ISO/IEC 27000 family, which covers information security management: a generic and extensive set of standards spanning several activities, from technical aspects to governance and processes. These are closed (paid) standards.
  2. National standards, offered for example by national public bodies such as national cyber security centres. These can offer a wide variety of guides, criteria and up-to-date information covering various security aspects. They can also address local legislative requirements, although their target audience may be public organisations, such as government offices and related services, or individuals.
  3. NIST (https://www.nist.gov/cybersecurity) and CISA (https://www.cisa.gov/cybersecurity) provide guidelines and recommendations on various topics. A good starting point could be, for example, CISA’s Cyber Essentials Starter Kit and NIST’s Cybersecurity Framework.
  4. CIS (https://www.cisecurity.org/cybersecurity-best-practices/), see for example the CIS Controls, which are a good starting point.
  5. SANS (https://www.sans.org) provides guidelines and training on various topics.

Cloud platforms

  1. The Cloud Security Alliance (https://cloudsecurityalliance.org/) provides resources on cloud security. Their Cloud Controls Matrix is an extensive checklist for verifying your cloud environment’s security.
  2. BSI C5, the Cloud Computing Compliance Controls Catalogue (https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/Publications/CloudComputing/ComplianceControlsCatalogue-Cloud_Computing-C5.pdf).
  3. Several nations publish their own standards, some targeted at handling classified data. These can nevertheless give good ideas on how to secure your cloud platform.

Software development

  1. OWASP (https://owasp.org/) provides extensive documentation and various tools (even software such as OWASP ZAP). The OWASP ASVS (Application Security Verification Standard) is a checklist for verifying that your software can defend against common attacks.
  2. The Microsoft SDL (Security Development Lifecycle, https://www.microsoft.com/en-us/securityengineering/sdl/) covers the entire lifecycle of secure software development.
  3. The NIST SSDF, Secure Software Development Framework (https://csrc.nist.gov/Projects/ssdf).
  4. It is very likely that there are good resources from your country of origin.

When securing services, the non-exhaustive checklist below can help you find topics to start from.

  1. Various procedures, for example incident response, business continuity, disaster recovery, training, legal obligations
  2. Network segmentation
  3. Separation of duties
  4. Procedures to apply updates
  5. Procedures to monitor update needs, to rate their severity and applicability
  6. Logging
  7. Monitoring
  8. Backups
  9. System hardening and preserving the configuration baseline during the life-cycle
  10. Encryption, key handling and distribution, secure key generation
  11. System and software catalogues - namely asset catalogues or asset inventories - to help in various activities, like system updates
  12. Firewalling
  13. Antivirus, malware scanning, IDS/HIDS
  14. Minimised user rights - the principle of least privilege
  15. Change management, trail of actions
  16. Authentication, authorisation
  17. Secure software development

What if I find a vulnerability or a security flaw, whether in my own service or somewhere else?

No software or service is flawless, and hiding problems will just give a false feeling of security. But telling the supplier in a trusted and responsible way is equally important: developers of software (and also hardware!) often need time and resources to repair their mistakes, and service providers using that software need time to deploy the fixes to production, because vulnerabilities get exploited by miscreants. "In computer security, responsible disclosure (also known as 'coordinated vulnerability disclosure') is a vulnerability disclosure model in which a vulnerability or an issue is disclosed only after a period of time that allows for the vulnerability or issue to be patched or mended." - as the Wikipedia article nicely states. For this to work, you need to do one of two things, depending on your role:

  • if you find a vulnerability or security issue, report it in confidence to the service provider or development team, and give them time to respond. If you already see exploits in the wild, tell them. If you don't know who the supplier is, or need help, contact the EOSC Security Response team (abuse@eosc-security.eu)
  • if you operate a service, develop software, or publish data-sets (which can also contain sensitive information!), provide a communications entry point specifically for security reports. A vulnerability report, responsibly disclosed by the researcher, should not end up in a public or general ticketing system. And respond in a timely, honourable, and cooperative way to the reporter - who is kindly helping you prevent security incidents by disclosing responsibly!

OWASP also has some good and concise guidance on disclosure.
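One lightweight way to provide such a security-reporting entry point is a security.txt file (RFC 9116), served at /.well-known/security.txt on your service. A minimal sketch, in which the contact address and URLs are placeholders for your own:

```text
# Served at https://example.org/.well-known/security.txt
# Contact and Expires are the two fields RFC 9116 requires.
Contact: mailto:security@example.org
Expires: 2026-12-31T23:00:00.000Z
Preferred-Languages: en
Policy: https://example.org/security-policy
```

Keeping the Expires date current signals to researchers that the contact point is actively maintained.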

What does "honour the confidentiality requirements of information" in item 4 mean?

Information essential for the secure operation of a service, such as the names, email addresses and telephone numbers of service operators, network addresses and associated configuration information, and non-public security (CSIRT) contact data and threat intelligence, may be exchanged as part of normal service operation or during a security incident investigation. Any obligations governing the sharing or publication of such information must be honoured.

What are “the legal and contractual rights of Users and others with regard to their personal data processed as part of service delivery” in item 5?

By law, users (and communities) already have rights. These rights must of course be respected: they include protection of personal data (the GDPR), as well as rights under the e-Privacy directive (the ‘cookie laws’), information security directives (such as NIS for critical and important infrastructure), and national regulations. But there are more considerations: in your acceptable use policy (such as the WISE Baseline AUP) you grant users certain rights and raise expectations regarding how their data are used, and this becomes a form of ‘contractual’ right. For example, you use this data, usually collected merely because the user accesses the service, for operational purposes, access control, accounting, and for investigating operational incidents such as slow response and service outages - and for security incidents. Yet in the WISE Baseline AUP, for instance, you explicitly promised not to use that data for other purposes.

The data that a user puts into your service, such as data sets and collections, databases, and other digital objects, also comes with specific rights for the user. Those should be part of your (implicit) contract with the user. For instance, when a service hosts medical datasets, there is an extensive set of controls that governs how the data may be processed. And if users store data that has commercial value, you should honour the confidentiality of such data in accordance with your agreement with the user or community.

"Retain system generated information (logs)" in item 6 sounds rather open-ended. What do I need to do? And why?

The minimum level of traceability for use of the IT Infrastructure is to be able to identify the source of all actions (executables, file transfers, pilot jobs, portal jobs, virtual machine management, image management, etc.) and the individual who initiated them. It’s crucial that the logs you collect are kept safe, collected centrally, and protected from modification by any attacker - the first thing an attacker will do is hide the traces of the intrusion by removing or rewriting log data. Keep your log collection in a separate place, on a protected server, or send it to an (in-house or contracted) security operations centre collection point. Even if you think your service ‘only serves up public data’, you should still collect logs - what happens if one of your datasets has been replaced by malware after an incident? Do you then still know who downloaded that data, and thus who else has been exposed? Can you prevent further spreading? For that, you need logs.

The aim is to be able to answer the basic questions who, what, where, and when concerning any incident. This requires retaining all relevant information, including timestamps and the digital identity of the user, sufficient to identify, for each service instance, at least the following security events: connect, authenticate, authorise (including identity changes) and disconnect. In addition, it is important to know what the most valuable and vulnerable assets in your system are, and to gather logs on their state. This helps, for example, to detect integrity or availability violations, with sufficient log data to track what has happened, when, and by whom.
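As a sketch of what answering "who, what, where, when" can look like in practice, each security event can be emitted as one structured (JSON) log record. The field names below are illustrative, not mandated by the baseline:

```python
import json
from datetime import datetime, timezone

def security_event(event, user, source_ip, service, outcome):
    """Build one structured log record covering who/what/where/when."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when (UTC)
        "event": event,          # what: connect/authenticate/authorise/disconnect
        "user": user,            # who: the digital identity of the user
        "source_ip": source_ip,  # where from
        "service": service,      # which service instance
        "outcome": outcome,      # success or failure
    }
    return json.dumps(record)

# Example: record a successful authentication
line = security_event("authenticate", "alice@example.org",
                      "192.0.2.10", "wiki-prod-01", "success")
```

One record per event, one line per record, makes later aggregation and searching in tools such as OpenSearch straightforward.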

The logs are usually your number one source of evidence when investigating issues. This is why it is important to transfer them to a separate platform, in case of, for example, massive hardware problems, data corruption, or security issues. In addition, an aggregated log source can make implementing monitoring or log analysis a lot easier later on. In case of a security breach, deleting any trace of the attack is a priority for the malicious actor, which can often be mitigated by transferring the logs away with only a small delay.

“Aggregated centrally wherever possible, and protected from unauthorised access or modification” in item 6, how and why?

It is vital that you keep your centralised log service safe. This is usually done by separating the system on a logical level and, in particular, by applying the principle of separation of duties: if a person wants to hide their malicious actions on a server, access restrictions prevent them from reaching the remote log storage. This ensures that the chain of evidence can be trusted, both to prove that a certain action was taken and to prove whether the logs have been tampered with, which might indicate an attempt at framing someone.
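One simple technique for making tampering detectable (a sketch of one possible mechanism, not something the baseline mandates) is a hash chain: each stored entry carries a hash computed over the previous entry's hash plus the current message, so any later modification breaks every subsequent link:

```python
import hashlib

def chain_logs(messages):
    """Return (message, hex_digest) pairs forming a tamper-evident hash chain."""
    prev = b"\x00" * 32  # fixed genesis value for the first link
    chained = []
    for msg in messages:
        digest = hashlib.sha256(prev + msg.encode()).digest()
        chained.append((msg, digest.hex()))
        prev = digest
    return chained

def verify_chain(chained):
    """Recompute the chain; return False if any entry was altered."""
    prev = b"\x00" * 32
    for msg, hexdigest in chained:
        digest = hashlib.sha256(prev + msg.encode()).digest()
        if digest.hex() != hexdigest:
            return False
        prev = digest
    return True
```

Because each digest depends on all earlier entries, an attacker who rewrites one log line would have to rewrite the digests of every later entry as well, which the remote, access-restricted copy makes impossible.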

There are several technologies, platforms and transfer protocols that can be used to store and aggregate the logs. In addition to commercial products, open source software can also be used. Logs can be transferred, for example, using the syslog protocol (UDP/514 or TCP), e.g. with rsyslog, which is easily installed on any Linux distribution. Software stacks such as the ELK stack or OpenSearch can provide tools for log analysis and monitoring on top of your log storage. Also, the ‘central log service’ need not be a single system - it could well be a set of systems, or a protected immutable database cluster, depending on the size and complexity of your services.
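For example, forwarding all logs from a Linux host to a central collector with rsyslog can be as simple as the following fragment (the hostname is a placeholder; a single `@` selects UDP, `@@` selects TCP):

```text
# /etc/rsyslog.d/50-forward.conf
# Forward everything to the central log host over TCP (more reliable
# than UDP), in addition to normal local storage.
*.*  @@loghost.example.org:514
```

On the collector side, the received logs should land on a host that the monitored systems' administrators cannot modify, in line with the separation of duties described above.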

In addition, sufficiently fine-grained controls, such as monitoring to detect abnormal behaviour, are necessary for keeping services operational. It is essential to be able to understand the cause of a problem, for instance by temporarily suspending user access, and to fix it before re-enabling access for the user.

Log aggregation in the layered and composite infrastructure of EOSC

If the service has topologically vertical or horizontal dependencies (see the definition of ‘layered technology stack’), these should be taken into account. The service provider should ensure that the dependent services have similar solutions for log storage. To fully benefit from available logs, there should be channels for co-operation and information exchange, which can be achieved by dedicated contact points, processes and agreements.

It should be noted that the chain of evidence should be consistent throughout the service stack. This might require some synchronisation of the data fields and of the precision and extent of what is gathered for each action, not forgetting synchronised retention periods and well-defined timestamps.
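"Well-defined timestamps" usually means recording times in UTC, in an unambiguous format such as ISO 8601, on NTP-synchronised clocks, so that events from different layers can be correlated. A minimal sketch:

```python
from datetime import datetime, timezone

def utc_timestamp():
    """ISO 8601 timestamp in UTC, safe to compare across services."""
    return datetime.now(timezone.utc).isoformat(timespec="milliseconds")

# e.g. "2025-06-01T10:20:30.123+00:00"
stamp = utc_timestamp()
```

Using an explicit UTC offset (rather than local time with no zone) avoids misordering events when correlating logs across providers in different time zones.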

In some cases there might be limitations in what can be achieved. It is vital that the problems, gaps and discrepancies are recognized and mitigated as well as possible.

What about the 'reconstruction of a coherent and complete view of activity' when you have a ‘layered technology stack’, as mentioned in item 6?

A ‘layered technology stack’ describes services that depend on multiple other services, usually provided by different operators; the term ‘composite services’ is also used. For example, the physical infrastructure can be provided by one provider, an operating system layer on top of that by another provider, and the end product can consist of several interconnected software layers on top of it all. From a security and operational perspective, it is necessary for all of these to fulfil similar requirements, to ensure that no layer puts another layer at risk or makes investigations impossible.

Case example JupyterLab:

  1. IaaS cloud provider
  2. Kubernetes provider
  3. JupyterLab provider

Conclusion: it must be ensured that the trail of actions is preserved across all the services. For example, a security weakness in any one of these services might serve as part of an attack vector to access the others by pivoting.

Case example Wiki service:

  1. IaaS cloud provider
  2. Shared storage provider
  3. Wiki provider, who produces instances of the wiki platform to several customers
  4. A wiki customer, who is using external SSO service for authentication

Conclusion: the shared services can be fragile and require isolation. A comprehensive trail of actions is needed to understand the details of a malfunction or intrusion, especially if a weakness in the shared service has been used to attack another customer’s wiki instance. It is vital to extend the requirements to all the wiki instances and also to the external SSO provider, in addition to the underlying platform.

What are “Named persons”?

The "named persons" should be designated, but not necessarily disclosed to or registered with EOSC. Naming a role alone is not sufficient: roles may be unoccupied, whereas a named person can always be found - and in the end, accountability falls on identifiable individuals. At some point this also needs to be 'tested'. Someone should own the process of monitoring this baseline, and that is the person you want named here. So it is a process contact point, like in ITSM processes.

The Trusted CI framework also addresses this issue by emphasizing the need for sufficient staff effort.

The generic security contact points are in any case already required by Sirtfi.

