Enterprise edition
Activity unified interface and logging are available under the "Filigran enterprise edition" license.
As explained in the overview page, all administration actions are listened to by default. However, knowledge is not listened to by default, due to the performance impact on the platform.
For this reason, you need to explicitly activate extended listening for users, groups, or organizations.
Listening starts right after configuration. Past events will not be taken into account.
Enterprise edition
Activity unified interface and logging are available under the "Filigran enterprise edition" license.
The OpenCTI activity capability is the way to unify what really happens in the platform. In the events section, you will have access to a UI that answers "who did what, where, and when?" within your data with the maximum level of transparency.
By default, the events screen only shows the administration actions done by the users.
If you also want to see information about the knowledge, you can simply activate the filter in the bar to get a complete overview of all user actions.
Don't hesitate to read the overview page again to better understand the difference between Audit and Basic/Extended knowledge.
Enterprise edition
Activity unified interface and logging are available under the "Filigran enterprise edition" license.
The OpenCTI activity capability is the way to unify what really happens in the platform. With this feature, you will be able to answer "who did what, where, and when?" within your data with the maximum level of transparency. Enabling activity helps your security, auditing, and compliance teams monitor the platform for possible vulnerabilities or external data misuse.
The activity capability groups three different concepts that need to be explained.
The basic knowledge refers to all STIX data knowledge inside OpenCTI. Every create/update/delete action on that knowledge is accessible through the history. That basic activity is handled by the history manager and can also be found directly on each entity.
The extended knowledge refers to extra data tracked for specific user activity. As this kind of tracking is expensive, it is only done for the specific users/groups/organizations explicitly configured.
Audit focuses on user administration and security actions. Audit produces console/log files along with user interface elements.
+{
+ "auth": "<User information>",
+ "category": "AUDIT",
+ "level": "<info | error>",
+ "message": "<human readable explanation>",
+ "resource": {
+ "type": "<authentication | mutation>",
+ "event_scope": "<depends on type>",
+ "event_access": "<administration>",
+ "data": "<contextual data linked to the event type>",
+ "version": "<version of audit log format>"
+ },
+ "timestamp": "<event date>",
+ "version": "<platform version>"
+}
+
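For concreteness, a record following this structure might look like the following; every value below is invented for illustration:

```json
{
  "auth": "jane.doe@example.com",
  "category": "AUDIT",
  "level": "info",
  "message": "admin creates the group `analysts`",
  "resource": {
    "type": "mutation",
    "event_scope": "create",
    "event_access": "administration",
    "data": { "group_name": "analysts" },
    "version": "1"
  },
  "timestamp": "2024-01-15T14:32:11.214Z",
  "version": "5.12.0"
}
```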
OpenCTI uses different mechanisms to publish actions (audit) or data modifications (history).
Administration or security actions
With Enterprise Edition activated, administration and security actions are always written; you can't configure, exclude, or disable them.
+Supported
+Not supported for now
+Not applicable
|  | Create | Delete | Edit |
|---|---|---|---|
| Remote OCTI Streams |  |  |  |
|  | Create | Delete | Edit |
|---|---|---|---|
| CSV Feeds |  |  |  |
| TAXII Feeds |  |  |  |
| Stream Feeds |  |  |  |
|  | Create | Delete | Edit |
|---|---|---|---|
| Connectors |  |  | State reset |
| Works |  |  |  |
|  | Create | Delete | Edit |
|---|---|---|---|
| Platform parameters |  |  |  |
|  | Create | Delete | Edit |
|---|---|---|---|
| Roles |  |  |  |
| Groups |  |  |  |
| Users |  |  |  |
| Sessions |  |  |  |
| Policies |  |  |  |
|  | Create | Delete | Edit |
|---|---|---|---|
| Entity types |  |  |  |
| Rules engine |  |  |  |
| Retention policies |  |  |  |
|  | Create | Delete | Edit |
|---|---|---|---|
| Status templates |  |  |  |
| Case templates + tasks |  |  |  |
|  | Listen |
|---|---|
| Login (success or fail) |  |
| Logout |  |
| Unauthorized access |  |
Extended knowledge
Extended knowledge activity is written only if you activate the feature for a subset of users, groups, or organizations.
+Some history actions are already included in the "basic knowledge". (basic marker)
|  | Read | Create | Delete | Edit |
|---|---|---|---|---|
| Platform knowledge |  | basic | basic | basic |
| Background tasks Knowledge |  |  |  |  |
| Knowledge files |  | basic | basic |  |
| Global data import files |  |  |  |  |
| Analyst workbenches files |  |  |  |  |
| Triggers |  |  |  |  |
| Workspaces |  |  |  |  |
| Investigations |  |  |  |  |
| User profile |  |  |  |  |
|  | Supported |
|---|---|
| Ask for file import |  |
| Ask for data enrichment |  |
| Ask for export generation |  |
| Execute global search |  |
Enterprise edition
Activity unified interface and logging are available under the "Filigran enterprise edition" license.
Having all the history in the user interface (events) is sometimes not enough for proactive monitoring. For this reason, you can configure specific triggers to receive notifications on audit events. You can configure personal triggers, either live ones that are sent directly or digests, depending on your needs.
For this kind of trigger, you will have to configure different options:
- Notification target: user interface or email
- Recipients: who will receive the notification
- Filters: a set of filters to get only the events that really interest you (who is responsible for the event, the kind of event, ...)
In order to correctly configure the filters, here is a definition of the event structure:
- Event type authentication: event scopes login and logout
- Event type read: event scopes read and unauthorized
- Event type file: event scopes read, create and delete
- Event type mutation: event scopes unauthorized, update, create and delete
- Event type command: event scopes search, enrich, import and export
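As a hedged illustration of how these types and scopes drive filtering, a trigger aimed at export activity would match events whose resource carries the command type and the export scope, such as this invented fragment:

```json
{
  "resource": {
    "type": "command",
    "event_scope": "export"
  }
}
```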
In OpenCTI, CSV mappers allow parsing CSV files into STIX 2.1 objects. The mappers are created and configured by users with the Manage CSV mappers capability, and then made available to users who import CSV files, for instance inside a report or in the global import view, and want to extract information from these files.
+The mapper contains representations of STIX 2.1 entities and relationships, in order for the parser to properly extract them.
+One mapper is dedicated to parsing a specific CSV file structure, and thus dedicated mappers should be created for each
+and every specific CSV structure you might need to ingest in the platform.
In menu Data, select the submenu Processing, and on the right menu select CSV Mappers. You are presented with a list of all the mappers set in the platform. +Note that you can delete or update any mapper from the context menu via the burger button beside each mapper.
+Click on the button + in the bottom-right corner to add a new Mapper.
+Enter a name for your mapper and some basic information about your CSV files:
+Info
Note that the parser will not extract any information from the CSV header, if any; it will just skip the first line during parsing.
+Then, you need to create every representation, one per entity and relationship type represented in the CSV file. +Click on the + button to add an empty representation in the list, and click on the chevron to expand the section and configure the representation.
Depending on the entity type, the form contains fields that are either required (input outlined in red) or optional. For each field, set the corresponding column mapping (the letter-based index of the column in the CSV table, as presented in common spreadsheet tools).
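For instance, take a hypothetical CSV file of intrusion sets; the file content below is invented for illustration:

```csv
name,description,first_seen
APT-Example,An example intrusion set,2023-01-15
```

In this case, the representation's name field would map to column A, the description field to column B, and the date field to column C.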
+References to other entities should be picked from the list of all the other representations already defined earlier in the mapper.
+You can do the same for all the relationships between entities that might be defined in this particular CSV file structure.
+ +Fields might have options besides the mandatory column index, to help extract relevant data.
For example, a field can accept multiple values in a single cell, split by a separator character (+ or |).
The only parameter required to save a CSV mapper is a name; creating and refining its representations can be done iteratively.
+All CSV Mappers go through a quick validation that checks if all the representations have all their mandatory fields set. +Only valid mappers can be run by the users on their CSV files.
+Mapper validity is visible in the list of CSV Mappers as shown below.
+ +In the creation or edition form, hit the button Test to open a dialog. Select a sample CSV file and hit the Test button.
The code block contains the raw result of the parsing attempt, in the form of a STIX 2.1 bundle in JSON format.
+You can then check if the extracted values match the expected entities and relationships.
You can change the default configuration of the import CSV connector in your configuration file:
"import_csv_built_in_connector": {
+ "enabled": true,
+ "interval": 10000,
+ "validate_before_import": false
+},
+
In Data import section, or Data tab of an entity, when you upload a CSV, you can select a mapper to apply to the file. +The file will then be parsed following the representation rules set in the mapper.
+By default, the imported elements will be added in a new Analyst Workbench where you will be able to check the result of the import.
Filigran
Filigran provides an Enterprise Edition of the platform, whether on-premise or in SaaS.
OpenCTI Enterprise Edition is based on the open core concept. This means that the source code of OCTI EE remains open source and included in the main GitHub repository of the platform, but is published under a specific license. As specified in the GitHub license file:
+The OpenCTI Community Edition is licensed under the Apache License, Version 2.0 (the “Apache License”). +The OpenCTI Enterprise Edition is licensed under the OpenCTI +Non-Commercial License (the “Non-Commercial License”). +The source files in this repository have a header indicating which license they are under. If no such header is provided, this means that the file is belonging to the Community Edition under the Apache License, Version 2.0.
We have written a complete article to explain the Enterprise Edition; feel free to read it for more information.
Enterprise Edition is easy to activate. You need to go to the platform settings and click on the Activate button.
+ +Then you will need to agree to the Filigran EULA.
+ +As a reminder:
+Audit logs help you answer "who did what, where, and when?" within your data with the maximum level of transparency. Please read Activity monitoring page to get all information.
+OpenCTI playbooks are flexible automation scenarios which can be fully customized and enabled by platform administrators to enrich, filter and modify the data created or updated in the platform. Please read Playbook automation page to get all information.
Organization segregation is a way to segregate your data based on the organization associated with the users. It is useful when your platform aims to share data with multiple organizations that share access to the same OpenCTI platform. See Organizations RBAC.
More features will be available in OpenCTI in the future, such as:
The following chapter aims at giving the reader an understanding of the possible options by entity type. Entity customization can be done in « Settings » → « Customization ».
This configuration hides a specific entity type across the entire platform. It is a powerful way to simplify the interface and focus on your domain expertise. For example, if you are not interested in disinformation campaigns, you can hide related entities like Narratives and Channels from the menus.
You can define which entities to hide platform-wide from « Settings » → « Customization », and also from « Settings » → « Parameters », which gives you a list of hidden entities.
You can also define hidden entities for specific user groups and organizations, from « Settings » → « Security » → « Groups/Organizations », by editing a group or organization.
+An overview is available in Parameters > Hidden entity types.
+This configuration enables an entity to automatically construct an external reference from the uploaded file.
This configuration enables the requirement of a reference message on entity creation or modification. This option is helpful if you want to keep strong consistency and traceability of your knowledge, and is well suited for manual creation and updates.
For now, OpenCTI has a simple workflow approach.
The available statuses for an entity are first defined by a collection of status templates (which can be defined from « Settings » → « Taxonomies » → « Status Template »).
Then, a workflow can be defined by ordering a sequence of status templates.
In an entity, each attribute offers some customization options:
The confidence scale can be customized for each entity type by selecting another scale template or by editing the scale values directly. Once you have customized your scale, click on "Update" to save your configuration.
This guide aims to give you a full overview of the OpenCTI features and workflows. The platform can be used in various contexts to handle threat management use cases from a technical to a more strategic level.
The OpenCTI administrative settings console allows administrators to configure many options dynamically within the system. As an administrator, you can access this settings console by clicking the settings link.
+The Settings Console allows for configuration of various aspects of the system.
+Various aspects of the Dark Theme can be dynamically configured in this section.
+Various aspects of the Light Theme can be dynamically configured in this section.
This section gives a general status of the various tools and enabled components of the currently configured OpenCTI deployment.
Within the OpenCTI platform, the merge capability is located in the "Data > Entities" tab and is fairly straightforward to use. To execute a merge, select the set of entities to be merged, then click on the Merge icon. NB: it is not possible to merge entities of different types, nor to merge more than 4 entities at a time (larger merges have to be done in several stages).
+ +Central to the merging process is the selection of a main entity. This primary entity becomes the anchor, retaining crucial attributes such as name and description. Other entities, while losing specific fields like descriptions, are aliased under the primary entity. This strategic decision preserves vital data while eliminating redundancy.
+ +Once the choice has been made, simply validate to run the task in the background. Depending on the number of entity relationships, and the current workload on the platform, the merge may take more or less time. In the case of a healthy platform and around a hundred relationships per entity, merge is almost instantaneous.
+A common concern when merging entities lies in the potential loss of information. In the context of OpenCTI, this worry is alleviated. Even if the merged entities were initially created by distinct sources, the platform ensures that data is not lost. Upon merging, the platform automatically generates relationships directly on the merged entity. This strategic approach ensures that all connections, regardless of their origin, are anchored to the consolidated entity. Post-merge, OpenCTI treats these once-separate entities as a singular, unified entity. Subsequent information from varied sources is channeled directly into the entity resulting from the merger. This unified entity becomes the focal point for all future relationships, ensuring the continuity of data and relationships without any loss or fragmentation.
+Under construction
+We are doing our best to complete this page. +If you want to participate, don't hesitate to join the Filigran Community on Slack +or submit your pull request on the Github doc repository.
This part of the interface will let you configure global platform settings, like title, favicon, etc.
+It will also give you important information about the platform.
+Configure global platform settings, like title, favicon, etc.
+Important information about the platform.
It is also the place to activate the Enterprise Edition.
+This section gives you the possibility to set and display Announcements in the platform. Those announcements will be visible to every user in the platform, on top of the interface.
They can be used to inform your whole user community of important information, like a scheduled downtime, an incoming upgrade, or even an important tip regarding the usage of the platform.
An Announcement can be accompanied by a "Dismiss" button. When clicked by a user, it makes the message disappear for this user.
+ +This option can be deactivated to have a permanent Announcement.
⚠️ Only one Announcement is displayed at a time. Dismissible Announcements are displayed first, then the latest non-dismissible Announcement.
+Enterprise edition
Analytics is available under the "Filigran enterprise edition" license.
Configure analytics providers (at the moment, only Google Analytics v4).
Allows setting a main organization for the entire platform.
All pieces of knowledge must be shared with the organization of the user wishing to access them, or this user needs to be inside the main organization.
+There are several authentication strategies to connect to the platform.
+Please see the Authentication section for further details.
Allows defining the password policy according to several criteria in order to strengthen the security of your platform, namely: minimum/maximum number of characters, number of digits, etc.
Allows defining login, consent and consent confirmation messages to customize and highlight your platform's security policy.
Allows OpenCTI deployments to have a custom banner message (top and bottom) and a colored background for the message (green, red, or yellow). It can be used to add a disclaimer or system purpose that will be displayed at the top and bottom of the OpenCTI instance pages.
+This configuration has two parameters:
+The rules engine comprises a set of predefined rules (named inference rules) that govern how new relationships are inferred based on existing data. These rules are carefully crafted to ensure logical and accurate relationship creation. Here is the list of existing inference rules:
| Conditions | Creations |
|---|---|
| A non-revoked Indicator is sighted in an Entity | Creation of an Incident linked to the sighted Indicator and the targeted Entity |

| Conditions | Creations |
|---|---|
| An Indicator is based on an Observable contained in an Observed Data | Creation of a sighting between the Indicator and the creating Identity of the Observed Data |

| Conditions | Creations |
|---|---|
| An Indicator based on an Observable is sighted in an Entity | The Observable is sighted in the Entity |

| Conditions | Creations |
|---|---|
| An Indicator is based on an Observable sighted in an Entity | The Indicator is sighted in the Entity |

| Conditions | Creations |
|---|---|
| An Observable is related to two Entities | Creation of a related-to relationship between the two Entities |

| Conditions | Creations |
|---|---|
| An Entity A is attributed to an Entity B and this Entity B is itself attributed to an Entity C | The Entity A is attributed to Entity C |

| Conditions | Creations |
|---|---|
| An Entity A is part of an Entity B and this Entity B is itself part of an Entity C | The Entity A is part of Entity C |

| Conditions | Creations |
|---|---|
| A Location A is located at a Location B and this Location B is itself located at a Location C | The Location A is located at Location C |

| Conditions | Creations |
|---|---|
| A User is affiliated with an Organization B, which is part of an Organization C | The User is affiliated with the Organization C |

| Conditions | Creations |
|---|---|
| A Report contains an Identity B and this Identity B is part of an Identity C | The Report contains Identity C, as well as the Relationship between Identity B and Identity C |

| Conditions | Creations |
|---|---|
| A Report contains a Location B and this Location B is located at a Location C | The Report contains Location C, as well as the Relationship between Location B and Location C |

| Conditions | Creations |
|---|---|
| A Report contains an Indicator and this Indicator is based on an Observable | The Report contains the Observable, as well as the Relationship between the Indicator and the Observable |

| Conditions | Creations |
|---|---|
| An Entity A, attributed to an Entity C, uses an Entity B | The Entity C uses the Entity B |

| Conditions | Creations |
|---|---|
| An Indicator, sighted at an Entity C, indicates an Entity B | The Entity B targets the Entity C |

| Conditions | Creations |
|---|---|
| An Entity A, attributed to an Entity C, targets an Entity B | The Entity C targets the Entity B |

| Conditions | Creations |
|---|---|
| An Entity A targets an Identity B, part of an Identity C | The Entity A targets the Identity C |

| Conditions | Creations |
|---|---|
| An Entity targets a Location B and this Location B is located at a Location C | The Entity targets the Location C |

| Conditions | Creations |
|---|---|
| An Entity A targets an Entity B and this target is located at Location D | The Entity A targets the Location D |
When a rule is activated, a background task is initiated. This task scans all platform data, identifying existing relationships that meet the conditions defined by the rule. Subsequently, it creates new objects (entities and/or relationships), expanding the network of insights within your threat intelligence environment. Then, activated rules operate continuously. Whenever a relationship is created or modified, and this change aligns with the conditions specified in an active rule, the reasoning mechanism is triggered. This ensures real-time relationship inference.
+Deactivating a rule leads to the deletion of all objects and relationships created by it. This cleanup process maintains the accuracy and reliability of your threat intelligence database.
+Access to the rule engine panel is restricted to administrators only. Regular users do not have visibility into this section of the platform. Administrators possess the authority to activate or deactivate rules.
+The rules engine empowers OpenCTI with the capability to automatically establish intricate relationships within your data. However, these rules can lead to a very large number of objects created. Even if the operation is reversible, an administrator should consider the impact of activating a rule.
+Under construction
+We are doing our best to complete this page. +If you want to participate, don't hesitate to join the Filigran Community on Slack +or submit your pull request on the Github doc repository.
+Data segregation in the context of Cyber Threat Intelligence refers to the practice of categorizing and separating different types of data or information related to cybersecurity threats based on specific criteria.
This separation helps organizations manage and analyze threat intelligence more effectively and securely. The goal of data segregation is to ensure that only individuals who are authorized to view a particular set of data have access to it.
+Practically, "Need-to-know basis" and "classification level" are data segregation measures.
+Marking definitions are essential in the context of data segregation to ensure that data is appropriately categorized and protected based on its sensitivity or classification level. Marking definitions establish a standardized framework for classifying data.
+Marking Definition objects are unique among STIX objects in the STIX 2.1 standard in that they cannot be versioned. This restriction is in place to prevent the possibility of indirect alterations to the markings associated with a STIX Object.
+Multiple markings can be added to the same object. Certain categories of marking definitions or trust groups may enforce rules that specify which markings take precedence over others or how some markings can be added to complement existing ones.
+In OpenCTI, data is segregated based on knowledge marking. The diagram provided below illustrates the manner in which OpenCTI establishes connections between pieces of information to authorize data access for a user:
+ +The Traffic Light Protocol is implemented by default as marking definitions in OpenCTI. It allows you to segregate information by TLP level in your platform and restrict access to marked data if users are not authorized to see the corresponding marking.
The Traffic Light Protocol (TLP) was designed by the Forum of Incident Response and Security Teams (FIRST) to provide a standardized method for classifying and handling sensitive information, based on four categories of sensitivity.
For more details, the diagram provided below illustrates how the marking definitions are categorized:
In order to create a marking, you must first be able to access the Settings tab. For example, a user in a group with the Administrator role (which bypasses all capabilities), or in a group whose role has Access administration checked, can access the Settings tab. For more details about user administration, see: Users and Role Based Access Control.
Once you have access to the settings, you can create your new marking in Security -> Marking Definitions. A marking has:
+In order for all users in a group to be able to see entities and relationships that have specific markings on them, allowed markings can be checked when updating a group:
+ +To apply a default marking when creating a new entity or relationship, you can choose which marking to add by default from the list of allowed markings. You can add only one marking per type, but you can have multiple types.
Be careful: adding markings as default markings is not enough to have them applied when you create an entity or relationship; you also need to enable default markings in the entity or relationship customization.
For example, if you create a new report, go to Settings -> Customization -> Report -> Markings and click on Activate/Desactivate default values.
To authorize a group to automatically have access to newly created marking definitions in its allowed marking definitions, you can check Automatically authorize this group to new marking definition when updating a group:
When a new entity or a new relationship is created, if multiple markings of the same type and different order are added, the platform will only keep the highest order for each type.
+For example:
Create a new report and add the markings PAP:AMBER, PAP:RED, TLP:AMBER+STRICT, TLP:CLEAR and a statement CC-BY-SA-4.0 DISARM Foundation. The final markings kept are: PAP:RED, TLP:AMBER+STRICT and CC-BY-SA-4.0 DISARM Foundation.
When updating an entity or a relationship:
+When you merge multiple entities, the platform will keep the highest order for each type of markings when the merge is complete:
For example, merging two observables, one with TLP:CLEAR and PAP:CLEAR and the other one with PAP:RED and TLP:GREEN, from 198.250.250.11 into 197.250.251.12: as a final result, you will have the observable with the value 197.250.251.12 with PAP:RED and TLP:GREEN.
When you import data from a connector, the connector cannot downgrade a marking on an entity if a marking of the same type is already set on it.
For example, if you create a new observable with the same value as AlienVault data and change its marking in the platform to TLP:AMBER, then when importing data, the platform will keep the highest order for the same type of marking.
Under construction
+We are doing our best to complete this page. +If you want to participate, don't hesitate to join the Filigran Community on Slack +or submit your pull request on the Github doc repository.
In OpenCTI, the RBAC system is not only related to what users can or cannot do in the platform (aka Capabilities) but also to the system of data segregation. Platform behaviour, such as default home dashboards, default triggers and digests, as well as default hidden menus or entities, can also be defined across groups and organizations.
Roles are used in the platform to grant groups with capabilities that define what users in those groups can and cannot do.
| Capability | Description |
|---|---|
| Bypass all capabilities | Bypass everything, including data segregation and enforcements. |
| Access knowledge | Read-only access to all the knowledge in the platform. |
| Access to collaborative creation | Create notes and opinions (and modify their own) on entities and relations. |
| Create / Update knowledge | Create and update existing entities and relationships. |
| Restrict organization access | Share entities and relationships with other organizations. |
| Delete knowledge | Delete entities and relationships. |
| Upload knowledge files | Upload files in the Data and Content sections of entities. |
| Download knowledge export | Download the exports generated in the entities (in the Data section). |
| Ask for knowledge enrichment | Trigger an enrichment for a given entity. |
| Access exploration | Access workspaces, whether custom dashboards or investigations. |
| Create / Update exploration | Create and update existing workspaces, whether custom dashboards or investigations. |
| Delete exploration | Delete workspaces, whether custom dashboards or investigations. |
| Access connectors | Read information in the Data > Connectors section. |
| Manage connector state | Reset the connector state to restart ingestion from the beginning. |
| Access Taxii feed | Access and consume TAXII collections. |
| Manage Taxii collections | Create, update and delete TAXII collections. |
| Manage CSV mappers | Create, update and delete CSV mappers. |
| Access administration | Access and manage overall parameters of the platform in Settings > Parameters. |
| Manage credentials | Access and manage roles, groups, users, organizations and security policies. |
| Manage marking definitions | Update and delete marking definitions. |
| Manage labels & Attributes | Update and delete labels, custom taxonomies, workflows and case templates. |
| Connectors API usage: register, ping, export push ... | Connector-specific permissions for register, ping, push export files, etc. |
| Connect and consume the platform streams (/stream, /stream/live) | List and consume the OpenCTI live streams. |
| Bypass mandatory references if any | If external references are enforced for a type of entity, be able to bypass the enforcement. |
You can manage the roles in Settings > Security > Roles.
To create a role, just click on the + button:
Then you will be able to define the capabilities of the role:
You can manage the users in Settings > Security > Users. If you are using Single-Sign-On (SSO), the users in OpenCTI are automatically created upon login.
To create a user, just click on the + button:
When accessing a user, it is possible to:
Groups are the main vehicle to manage permissions, data segregation, and platform customization for the users who are part of a given group. You can manage the groups in Settings > Security > Groups.
Here is the description of the available group parameters.
+Parameter | +Description | +
---|---|
Auto new markings |
+If a new marking definition is created, this group will automatically be granted to it. | +
Default membership |
+If a new user is created (manually or upon SSO), it will be added to this group. | +
Roles |
+Roles and capabilities granted to the users belonging to this group. | +
Default dashboard |
+Customize the home dashboard for the users belonging to this group. | +
Default markings |
+In Settings > Customization > Entity types , if default marking definitions are enabled, the default markings of the group are used. |
+
Allowed markings |
+Grant access to the group to the defined marking definitions, more details in data segregation. | +
Triggers and digests |
+Define default triggers and digests for the users belonging to this group. | +
When managing a group, you can define the members and all above configurations.
+ +Users can belong to organizations, which is an additional layer of data segregation and customization.
+Platform administrators can promote members of an organization as "Organization administrator". This elevated role grants them the necessary capabilities to create, edit and delete users from the corresponding organization. Additionally, administrators have the flexibility to define a list of groups that can be granted to newly created members by the organization administrators. This feature simplifies the process of granting appropriate access and privileges to individuals joining the organization.
+ +The platform administrator can promote/demote an organization admin through its user edition form.
+ +The "Organization admin" has restricted access to Settings. They can only manage the members of the organizations for which they have been promoted as "admins".
+ + + + + + + + + + + + + + + + + + +OpenCTI supports several authentication providers. If you configure multiple strategies, they will be tested in the order you declared them.
+Activation
+You only need to configure/activate the authentication strategies you really want to offer to your users.
+The product proposes two kinds of authentication strategies:
+Under the hood, we technically use the strategies provided by PassportJS. We integrate a subset of the strategies available with Passport; if you need more, we could theoretically integrate all the Passport strategies.
+This strategy uses the OpenCTI database for user management.
+OpenCTI uses this strategy by default, but it is not the one we recommend for security reasons.
+ +Production deployment
+Please use the LDAP/Auth0/OpenID/SAML strategy for production deployment.
+This strategy can be used to authenticate your users with your company LDAP and is based on Passport - LDAPAuth.
+"ldap": {
+ "strategy": "LdapStrategy",
+ "config": {
+ "url": "ldaps://mydc.domain.com:636",
+ "bind_dn": "cn=Administrator,cn=Users,dc=mydomain,dc=com",
+ "bind_credentials": "MY_STRONG_PASSWORD",
+ "search_base": "cn=Users,dc=mydomain,dc=com",
+ "search_filter": "(cn={{username}})",
+ "mail_attribute": "mail",
+ // "account_attribute": "givenName",
+ // "firstname_attribute": "cn",
+ // "lastname_attribute": "cn",
+ "account_attribute": "givenName",
+ "allow_self_signed": true
+ }
+}
+
If you would like to automatically associate LDAP groups with OpenCTI groups/organizations:
+"ldap": {
+ "config": {
+ ...
+ "group_search_base": "cn=Groups,dc=mydomain,dc=com",
+ "group_search_filter": "(member={{dn}})",
+ "groups_management": { // To map LDAP Groups to OpenCTI Groups
+ "group_attribute": "cn",
+ "groups_mapping": ["LDAP_Group_1:OpenCTI_Group_1", "LDAP_Group_2:OpenCTI_Group_2", ...]
+ },
+ "organizations_management": { // To map LDAP Groups to OpenCTI Organizations
+ "organizations_path": "cn",
+ "organizations_mapping": ["LDAP_Group_1:OpenCTI_Organization_1", "LDAP_Group_2:OpenCTI_Organization_2", ...]
+ }
+ }
+}
+
This strategy can be used to authenticate your users with your company SAML and is based on Passport - SAML.
+"saml": {
+ "identifier": "saml",
+ "strategy": "SamlStrategy",
+ "config": {
+ "issuer": "mytestsaml",
+ // "account_attribute": "nameID",
+ // "firstname_attribute": "nameID",
+ // "lastname_attribute": "nameID",
+ "entry_point": "https://auth.mydomain.com/auth/realms/mydomain/protocol/saml",
+ "saml_callback_url": "http://localhost:4000/auth/saml/callback",
+ // "private_key": "MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwg...",
+ "cert": "MIICmzCCAYMCBgF2Qt3X1zANBgkqhkiG9w0BAQsFADARMQ8w...",
+ "logout_remote": false
+ }
+}
+
For the SAML strategy to work:
+cert
parameter is mandatory (PEM format) because it is used to validate the SAML response.private_key
(PEM format) is optional and is only required if you want to sign the SAML client request.Certificates
+Be careful to put the cert
/ private_key
keys in PEM format. Indeed, a lot of systems generally export the keys in X509 / PKCS12 formats, so you will need to convert them.
+Here is an example of extracting PEM files from a PKCS12 keystore:
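A typical conversion using the standard openssl CLI might look like this (a minimal sketch; keystore.p12 and the output file names are placeholders, and your keystore may additionally require a passphrase via -passin):

```shell
# Extract the certificate (PEM) from the PKCS12 keystore:
openssl pkcs12 -in keystore.p12 -clcerts -nokeys -out cert.pem
# Extract the unencrypted private key (PEM) from the same keystore:
openssl pkcs12 -in keystore.p12 -nocerts -nodes -out private_key.pem
```

The resulting cert.pem / private_key.pem contents can then be used for the cert and private_key parameters.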
+
Here is an example of SAML configuration using environment variables:
+- PROVIDERS__SAML__STRATEGY=SamlStrategy
+- "PROVIDERS__SAML__CONFIG__LABEL=Login with SAML"
+- PROVIDERS__SAML__CONFIG__ISSUER=mydomain
+- PROVIDERS__SAML__CONFIG__ENTRY_POINT=https://auth.mydomain.com/auth/realms/mydomain/protocol/saml
+- PROVIDERS__SAML__CONFIG__SAML_CALLBACK_URL=http://opencti.mydomain.com/auth/saml/callback
+- PROVIDERS__SAML__CONFIG__CERT=MIICmzCCAYMCBgF3Rt3X1zANBgkqhkiG9w0BAQsFADARMQ8w
+- PROVIDERS__SAML__CONFIG__LOGOUT_REMOTE=false
+
OpenCTI supports mapping SAML Roles/Groups onto OpenCTI Groups. Here is an example:
+"saml": {
+ "config": {
+ ...,
+ // Groups mapping
+ "groups_management": { // To map SAML Groups to OpenCTI Groups
+ "group_attributes": ["Group"],
+ "groups_mapping": ["SAML_Group_1:OpenCTI_Group_1", "SAML_Group_2:OpenCTI_Group_2", ...]
+ },
+ "groups_management": { // To map SAML Roles to OpenCTI Groups
+ "group_attributes": ["Role"],
+ "groups_mapping": ["SAML_Role_1:OpenCTI_Group_1", "SAML_Role_2:OpenCTI_Group_2", ...]
+ },
+ // Organizations mapping
+ "organizations_management": { // To map SAML Groups to OpenCTI Organizations
+ "organizations_path": ["Group"],
+ "organizations_mapping": ["SAML_Group_1:OpenCTI_Organization_1", "SAML_Group_2:OpenCTI_Organization_2", ...]
+ },
+ "organizations_management": { // To map SAML Roles to OpenCTI Organizations
+ "organizations_path": ["Role"],
+ "organizations_mapping": ["SAML_Role_1:OpenCTI_Organization_1", "SAML_Role_2:OpenCTI_Organization_2", ...]
+ }
+ }
+}
+
Here is an example of SAML Groups mapping configuration using environment variables:
+- "PROVIDERS__SAML__CONFIG__GROUPS_MANAGEMENT__GROUP_ATTRIBUTES=[\"Group\"]"
+- "PROVIDERS__SAML__CONFIG__GROUPS_MANAGEMENT__GROUPS_MAPPING=[\"SAML_Group_1:OpenCTI_Group_1\", \"SAML_Group_2:OpenCTI_Group_2\", ...]"
+
This strategy allows using the Auth0 service to handle authentication and is based on Passport - Auth0.
+"authzero": {
+ "identifier": "auth0",
+ "strategy": "Auth0Strategy",
+ "config": {
+ "clientID": "XXXXXXXXXXXXXXXXXX",
+ "baseURL": "https://opencti.mydomain.com",
+ "clientSecret": "XXXXXXXXXXXXXXXXXX",
+ "callback_url": "https://opencti.mydomain.com/auth/auth0/callback",
+ "domain": "mycompany.eu.auth0.com",
+ "audience": "XXXXXXXXXXXXXXX",
+ "scope": "openid email profile XXXXXXXXXXXXXXX",
+ "logout_remote": false
+ }
+}
+
Here is an example of Auth0 configuration using environment variables:
+- PROVIDERS__AUTHZERO__STRATEGY=Auth0Strategy
+- PROVIDERS__AUTHZERO__CONFIG__CLIENT_ID=${AUTH0_CLIENT_ID}
+- PROVIDERS__AUTHZERO__CONFIG__BASEURL=${AUTH0_BASE_URL}
+- PROVIDERS__AUTHZERO__CONFIG__CLIENT_SECRET=${AUTH0_CLIENT_SECRET}
+- PROVIDERS__AUTHZERO__CONFIG__CALLBACK_URL=${AUTH0_CALLBACK_URL}
+- PROVIDERS__AUTHZERO__CONFIG__DOMAIN=${AUTH0_DOMAIN}
+- PROVIDERS__AUTHZERO__CONFIG__SCOPE="openid email profile"
+- PROVIDERS__AUTHZERO__CONFIG__LOGOUT_REMOTE=false
+
This strategy allows using the OpenID Connect protocol to handle authentication and is based on Node OpenID Client, which is more powerful than the Passport one.
+"oic": {
+ "identifier": "oic",
+ "strategy": "OpenIDConnectStrategy",
+ "config": {
+ "label": "Login with OpenID",
+ "issuer": "https://auth.mydomain.com/auth/realms/mydomain",
+ "client_id": "XXXXXXXXXXXXXXXXXX",
+ "client_secret": "XXXXXXXXXXXXXXXXXX",
+ "redirect_uris": ["https://opencti.mydomain.com/auth/oic/callback"],
+ "logout_remote": false
+ }
+}
+
Here is an example of OpenID configuration using environment variables:
+- PROVIDERS__OPENID__STRATEGY=OpenIDConnectStrategy
+- "PROVIDERS__OPENID__CONFIG__LABEL=Login with OpenID"
+- PROVIDERS__OPENID__CONFIG__ISSUER=https://auth.mydomain.com/auth/realms/xxxx
+- PROVIDERS__OPENID__CONFIG__CLIENT_ID=XXXXXXXXXXXXXXXXXX
+- PROVIDERS__OPENID__CONFIG__CLIENT_SECRET=XXXXXXXXXXXXXXXXXX
+- "PROVIDERS__OPENID__CONFIG__REDIRECT_URIS=[\"https://opencti.mydomain.com/auth/oic/callback\"]"
+- PROVIDERS__OPENID__CONFIG__LOGOUT_REMOTE=false
+
OpenCTI supports mapping OpenID Roles/Groups onto OpenCTI Groups (everything is tied to a group in the platform). Here is an example:
+"oic": {
+ "config": {
+ ...,
+ // Groups mapping
+ "groups_management": { // To map OpenID Groups to OpenCTI Groups
+ "groups_scope": "groups",
+ "groups_path": ["groups", "realm_access.groups", "resource_access.account.groups"],
+ "groups_mapping": ["OpenID_Group_1:OpenCTI_Group_1", "OpenID_Group_2:OpenCTI_Group_2", ...]
+ },
+ "groups_management": { // To map OpenID Roles to OpenCTI Groups
+ "groups_scope": "roles",
+ "groups_path": ["roles", "realm_access.roles", "resource_access.account.roles"],
+ "groups_mapping": ["OpenID_Role_1:OpenCTI_Group_1", "OpenID_Role_2:OpenCTI_Group_2", ...]
+ },
+ // Organizations mapping
+ "organizations_management": { // To map OpenID Groups to OpenCTI Organizations
+ "organizations_scope": "groups",
+ "organizations_path": ["groups", "realm_access.groups", "resource_access.account.groups"],
+ "organizations_mapping": ["OpenID_Group_1:OpenCTI_Organization_1", "OpenID_Group_2:OpenCTI_Organization_2", ...]
+ },
+ "organizations_management": { // To map OpenID Roles to OpenCTI Organizations
+ "organizations_scope": "roles",
+ "organizations_path": ["roles", "realm_access.roles", "resource_access.account.roles"],
+ "organizations_mapping": ["OpenID_Role_1:OpenCTI_Organization_1", "OpenID_Role_2:OpenCTI_Organization_2", ...]
+ },
+ }
+}
+
Here is an example of OpenID Groups mapping configuration using environment variables:
+- "PROVIDERS__OPENID__CONFIG__GROUPS_MANAGEMENT__GROUPS_SCOPE=groups"
+- "PROVIDERS__OPENID__CONFIG__GROUPS_MANAGEMENT__GROUPS_PATH=[\"groups\", \"realm_access.groups\", \"resource_access.account.groups\"]"
+- "PROVIDERS__OPENID__CONFIG__GROUPS_MANAGEMENT__GROUPS_MAPPING=[\"OpenID_Group_1:OpenCTI_Group_1\", \"OpenID_Group_2:OpenCTI_Group_2\", ...]"
+
This strategy can authenticate your users with Facebook and is based on Passport - Facebook
+"facebook": {
+ "identifier": "facebook",
+ "strategy": "FacebookStrategy",
+ "config": {
+ "client_id": "XXXXXXXXXXXXXXXXXX",
+ "client_secret": "XXXXXXXXXXXXXXXXXX",
+ "callback_url": "https://opencti.mydomain.com/auth/facebook/callback",
+ "logout_remote": false
+ }
+}
+
This strategy can authenticate your users with Google and is based on Passport - Google
+"google": {
+ "identifier": "google",
+ "strategy": "GoogleStrategy",
+ "config": {
+ "client_id": "XXXXXXXXXXXXXXXXXX",
+ "client_secret": "XXXXXXXXXXXXXXXXXX",
+ "callback_url": "https://opencti.mydomain.com/auth/google/callback",
+ "logout_remote": false
+ }
+}
+
This strategy can authenticate your users with GitHub and is based on Passport - GitHub
+"github": {
+ "identifier": "github",
+ "strategy": "GithubStrategy",
+ "config": {
+ "client_id": "XXXXXXXXXXXXXXXXXX",
+ "client_secret": "XXXXXXXXXXXXXXXXXX",
+ "callback_url": "https://opencti.mydomain.com/auth/github/callback",
+ "logout_remote": false
+ }
+}
+
This strategy can authenticate a user based on SSL client certificates. For this, you need to configure OpenCTI to start in HTTPS, for example:
+"port": 443,
+"https_cert": {
+ "key": "/cert/server_key.pem",
+ "crt": "/cert/server_cert.pem",
+ "reject_unauthorized": true
+}
+
And then add the ClientCertStrategy
:
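A sketch of the provider declaration, assuming the same structure as the other strategies on this page (the label value is illustrative):

```json
"cert": {
  "strategy": "ClientCertStrategy",
  "config": {
    "label": "CLIENT CERT"
  }
}
```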
Then, when accessing OpenCTI for the first time, the browser will ask which certificate you want to use.
+The variable auto_create_group can be added in the options of some strategies (LDAP, SAML and OpenID). If this variable is true, the groups of a user that logs in will automatically be created if they don’t exist.
+More precisely, if the user that tries to authenticate has groups that don’t exist in OpenCTI but exist in the SSO configuration, there are two cases:
+We assume that Group1 exists in the platform and that newGroup doesn’t exist. The user trying to log in has the group newGroup. If auto_create_group = true in the SSO configuration, the group named newGroup will be created at platform initialization and the user will be mapped onto it. If auto_create_group = false or is undefined, the user can’t log in and an error is raised.
+"groups_management": {
+ "group_attribute": "cn",
+ "groups_mapping": ["SSO_GROUP_NAME1:group1", "SSO_GROUP_NAME_2:newGroup", ...]
+},
+"auto_create_group": true
+
+In this example, the users have a login form and need to enter a login and password. Authentication is attempted on LDAP first, then locally if LDAP authentication failed, and it finally fails if neither succeeded. Here is an example for the production.json
file:
"providers": {
+ "ldap": {
+ "strategy": "LdapStrategy",
+ "config": {
+ "url": "ldaps://mydc.mydomain.com:636",
+ "bind_dn": "cn=Administrator,cn=Users,dc=mydomain,dc=com",
+ "bind_credentials": "MY_STRONG_PASSWORD",
+ "search_base": "cn=Users,dc=mydomain,dc=com",
+ "search_filter": "(cn={{username}})",
+ "mail_attribute": "mail",
+ "account_attribute": "givenName"
+ }
+ },
+ "local": {
+ "strategy": "LocalStrategy",
+ "config": {
+ "disabled": false
+ }
+ }
+}
+
If you use a container deployment, here is an example using environment variables:
+- PROVIDERS__LDAP__STRATEGY=LdapStrategy
+- PROVIDERS__LDAP__CONFIG__URL=ldaps://mydc.mydomain.org:636
+- PROVIDERS__LDAP__CONFIG__BIND_DN=cn=Administrator,cn=Users,dc=mydomain,dc=com
+- PROVIDERS__LDAP__CONFIG__BIND_CREDENTIALS=XXXXXXXXXX
+- PROVIDERS__LDAP__CONFIG__SEARCH_BASE=cn=Users,dc=mydomain,dc=com
+- PROVIDERS__LDAP__CONFIG__SEARCH_FILTER=(cn={{username}})
+- PROVIDERS__LDAP__CONFIG__MAIL_ATTRIBUTE=mail
+- PROVIDERS__LDAP__CONFIG__ACCOUNT_ATTRIBUTE=givenName
+- PROVIDERS__LDAP__CONFIG__ALLOW_SELF_SIGNED=true
+- PROVIDERS__LOCAL__STRATEGY=LocalStrategy
+
The OpenCTI platform technological stack has been designed to be able to scale horizontally. All dependencies such as Elastic or Redis can be deployed in cluster mode and performances can be drastically increased by deploying multiple platform and worker instances.
+Here is the high level architecture for customers and Filigran cloud platform to ensure both high availability and throughput.
+ +In the ElasticSearch configuration of OpenCTI, it is possible to declare all nodes.
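For example, with environment variables, several nodes can be declared through the list form of elasticsearch:url (hostnames are placeholders; the JSON-array-in-value syntax mirrors the other list-valued variables on this page):

```yaml
- "ELASTICSEARCH__URL=[\"http://elastic1:9200\", \"http://elastic2:9200\", \"http://elastic3:9200\"]"
```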
+ +Compatibility
+OpenCTI is also compatible with OpenSearch and AWS / GCP / Azure native search services based on the ElasticSearch query language.
+Redis should be turned to cluster mode:
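A minimal sketch using the redis:mode and redis:hostnames parameters documented in the configuration section (hostnames are placeholders; the exact hostnames format is a list of host/port entries):

```yaml
- REDIS__MODE=cluster
- "REDIS__HOSTNAMES=[\"redis1:6379\", \"redis2:6379\", \"redis3:6379\"]"
```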
+ +Compatibility
+OpenCTI is also compatible with ElastiCache, MemoryStore and AWS / GCP / Azure native services based on the Redis protocol.
+For the RabbitMQ cluster, you will need a TCP load balancer on top of the nodes since the configuration does not support multi-nodes for now:
+ +Compatibility
+OpenCTI is also compatible with Amazon MQ, CloudAMQP and AWS / GCP / Azure native services based on the AMQP protocol.
+MinIO is an open source server able to serve S3 buckets. It can be deployed in cluster mode and is compatible with several storage backends. OpenCTI is compatible with any tool following the S3 standard.
+As shown on the schema, the best practices for cluster mode, and to avoid any congestion in the technological stack, are:
+When enabling clustering, the number of nodes is displayed in Settings > Parameters.
+ +Also, since some managers like the rule engine, the task manager and the notification manager can take some resources in the OpenCTI NodeJS process, it is highly recommended to disable them in the frontend cluster. OpenCTI automatically handles the distribution and the launching of the engines across all nodes in the cluster, except where they are explicitly disabled in the configuration.
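For instance, on the UI-facing nodes, the corresponding engines can be switched off through their *_ENABLED parameters documented in the configuration section (which managers to disable depends on your deployment):

```yaml
# On front-office (UI-facing) platform nodes only:
- RULE_ENGINE__ENABLED=false
- TASK_SCHEDULER__ENABLED=false
- NOTIFICATION_MANAGER__ENABLED=false
```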
+ + + + + + + + + + + + + + + + + + +The purpose of this section is to learn how to configure OpenCTI to have it tailored for your production and development needs.
+Here are the configuration keys, for both containers (environment variables) and manual deployment.
+Parameters equivalence
+The equivalent of a config variable in environment variables is the usage of double underscores (__
) for each level of config.
For example, the key app:admin:password will become the environment variable APP__ADMIN__PASSWORD.
+If you need to put a list of elements for the key, it must have a special formatting: the value is written as a JSON array. Here is an example for the redirect URIs of the OpenID config:
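A sketch of the list formatting, reusing the OpenID redirect URI sample from this page (quoting and escaping are required in docker-compose):

```yaml
- "PROVIDERS__OPENID__CONFIG__REDIRECT_URIS=[\"https://opencti.mydomain.com/auth/oic/callback\"]"
```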
+Parameter | +Environment variable | +Default value | +Description | +
---|---|---|---|
app:port | +APP__PORT | +4000 | +Listen port of the application | +
app:base_path | +APP__BASE_PATH | ++ | Specific URI (ie. /opencti) | +
app:base_url | +APP__BASE_URL | +http://localhost:4000 | +Full URL of the platform (should include the base_path if any) |
+
app:request_timeout | +APP__REQUEST_TIMEOUT | +1200000 | +Request timeout, in ms (default 20 minutes) | +
app:session_timeout | +APP__SESSION_TIMEOUT | +0 | +Session timeout, in ms (default 0 minute - disabled) | +
app:session_idle_timeout | +APP__SESSION_IDLE_TIMEOUT | +1200000 | +Idle timeout, in ms (default 20 minutes) | +
app:session_cookie | +APP__SESSION_COOKIE | +false | +Use memory/session cookie instead of persistent one | +
app:admin:email | +APP__ADMIN__EMAIL | +admin@opencti.io | +Default login email of the admin user | +
app:admin:password | +APP__ADMIN__PASSWORD | +ChangeMe | +Default password of the admin user | +
app:admin:token | +APP__ADMIN__TOKEN | +ChangeMe | +Default token (must be a valid UUIDv4) | +
Parameter | +Environment variable | +Default value | +Description | +
---|---|---|---|
http_proxy | +HTTP_PROXY | ++ | Proxy URL for HTTP connection (example: http://proxy:8080) | +
https_proxy | +HTTPS_PROXY | ++ | Proxy URL for HTTPS connection (example: http://proxy:8080) | +
no_proxy | +NO_PROXY | ++ | Comma separated list of hostnames for proxy exception (example: localhost,127.0.0.0/8,internal.opencti.io) | +
app:https_cert:cookie_secure | +APP__HTTPS_CERT__COOKIE_SECURE | +false | +Set the flag "secure" for session cookies. | +
app:https_cert:ca | +APP__HTTPS_CERT__CA | +Empty list [] | +Certificate authority paths or content, only if the client uses a self-signed certificate. | +
app:https_cert:key | +APP__HTTPS_CERT__KEY | ++ | Certificate key path or content | +
app:https_cert:crt | +APP__HTTPS_CERT__CRT | ++ | Certificate crt path or content | +
app:https_cert:reject_unauthorized | +APP__HTTPS_CERT__REJECT_UNAUTHORIZED | ++ | If not false, the server certificate is verified against the list of supplied CAs | +
Parameter | +Environment variable | +Default value | +Description | +
---|---|---|---|
app:app_logs:logs_level | +APP__APP_LOGS__LOGS_LEVEL | +info | +The application log level | +
app:app_logs:logs_files | +APP__APP_LOGS__LOGS_FILES | +true |
+Whether application logs are written to files | +
app:app_logs:logs_console | +APP__APP_LOGS__LOGS_CONSOLE | +true |
+Whether application logs are written to the console (useful for containers) | +
app:app_logs:logs_max_files | +APP__APP_LOGS__LOGS_MAX_FILES | +7 | +Maximum number of daily files in logs | +
app:app_logs:logs_directory | +APP__APP_LOGS__LOGS_DIRECTORY | +./logs | +File logs directory | +
Parameter | +Environment variable | +Default value | +Description | +
---|---|---|---|
app:audit_logs:logs_files | +APP__AUDIT_LOGS__LOGS_FILES | +true |
+Whether audit logs are written to files | +
app:audit_logs:logs_console | +APP__AUDIT_LOGS__LOGS_CONSOLE | +true |
+Whether audit logs are written to the console (useful for containers) | +
app:audit_logs:logs_max_files | +APP__AUDIT_LOGS__LOGS_MAX_FILES | +7 | +Maximum number of daily files in logs | +
app:audit_logs:logs_directory | +APP__AUDIT_LOGS__LOGS_DIRECTORY | +./logs | +Audit logs directory | +
Parameter | +Environment variable | +Default value | +Description | +
---|---|---|---|
app:map_tile_server_dark | +APP__MAP_TILE_SERVER_DARK | +https://map.opencti.io/styles/luatix-dark/{z}/{x}/{y}.png | +The address of the OpenStreetMap provider with dark theme style | +
app:map_tile_server_light | +APP__MAP_TILE_SERVER_LIGHT | +https://map.opencti.io/styles/luatix-light/{z}/{x}/{y}.png | +The address of the OpenStreetMap provider with light theme style | +
app:reference_attachment | +APP__REFERENCE_ATTACHMENT | +false |
+External reference mandatory attachment | +
Parameter | +Environment variable | +Default value | +Description | +
---|---|---|---|
app:graphql:playground:enabled | +APP__GRAPHQL__PLAYGROUND__ENABLED | +true |
+Enable the playground on /graphql | +
app:graphql:playground:force_disabled_introspection | +APP__GRAPHQL__PLAYGROUND__FORCE_DISABLED_INTROSPECTION | +false |
+Introspection is allowed to authenticated users but can be disabled if needed | +
app:concurrency:retry_count | +APP__CONCURRENCY__RETRY_COUNT | +200 | +Number of attempts to get the lock to work on an element (create/update/merge, ...) | +
app:concurrency:retry_delay | +APP__CONCURRENCY__RETRY_DELAY | +100 | +Delay between 2 lock retries (in milliseconds) | +
app:concurrency:retry_jitter | +APP__CONCURRENCY__RETRY_JITTER | +50 | +Random jitter to prevent concurrent retry (in milliseconds) | +
app:concurrency:max_ttl | +APP__CONCURRENCY__MAX_TTL | +30000 | +Global maximum time for lock retry (in milliseconds) | +
Parameter | +Environment variable | +Default value | +Description | +
---|---|---|---|
elasticsearch:engine_selector | +ELASTICSEARCH__ENGINE_SELECTOR | +auto | +elk or opensearch , default is auto , please put elk if you use token auth. |
+
elasticsearch:url | +ELASTICSEARCH__URL | +http://localhost:9200 | +URL(s) of the ElasticSearch (supports http://user:pass@localhost:9200 and list of URLs) | +
elasticsearch:username | +ELASTICSEARCH__USERNAME | ++ | Username can be put in the URL or with this parameter | +
elasticsearch:password | +ELASTICSEARCH__PASSWORD | ++ | Password can be put in the URL or with this parameter | +
elasticsearch:api_key | +ELASTICSEARCH__API_KEY | ++ | API key for ElasticSearch token auth. Please set also engine_selector to elk |
+
elasticsearch:index_prefix | +ELASTICSEARCH__INDEX_PREFIX | +opencti | +Prefix for the indices | +
elasticsearch:ssl:reject_unauthorized | +ELASTICSEARCH__SSL__REJECT_UNAUTHORIZED | +true |
+Enable TLS certificate check | +
elasticsearch:ssl:ca | +ELASTICSEARCH__SSL__CA | ++ | Custom certificate path or content | +
elasticsearch:ssl:ca_plain (deprecated) | +ELASTICSEARCH__SSL__CA_PLAIN | ++ | @deprecated, use ca directly | +
Parameter | +Environment variable | +Default value | +Description | +
---|---|---|---|
redis:mode | +REDIS__MODE | +single | +Connect to redis "single" or "cluster" | +
redis:namespace | +REDIS__NAMESPACE | ++ | Namespace (to use as prefix) | +
redis:hostname | +REDIS__HOSTNAME | +localhost | +Hostname of the Redis Server | +
redis:hostnames | +REDIS__HOSTNAMES | ++ | Hostnames definition for Redis cluster mode: a list of host/port objects. | +
redis:port | +REDIS__PORT | +6379 | +Port of the Redis Server | +
redis:use_ssl | +REDIS__USE_SSL | +false |
+Whether the Redis server has TLS enabled | +
redis:username | +REDIS__USERNAME | ++ | Username of the Redis Server | +
redis:password | +REDIS__PASSWORD | ++ | Password of the Redis Server | +
redis:ca | +REDIS__CA | +[] | +List of path(s) of the CA certificate(s) | +
redis:trimming | +REDIS__TRIMMING | +2000000 | +Number of elements to maintain in the stream. (0 = unlimited) | +
Parameter | +Environment variable | +Default value | +Description | +
---|---|---|---|
rabbitmq:hostname | +RABBITMQ__HOSTNAME | +localhost | +Hostname of the RabbitMQ server | +
rabbitmq:port | +RABBITMQ__PORT | +5672 | +Port of the RabbitMQ server | +
rabbitmq:port_management | +RABBITMQ__PORT_MANAGEMENT | +15672 | +Port of the RabbitMQ Management Plugin | +
rabbitmq:username | +RABBITMQ__USERNAME | +guest | +RabbitMQ user | +
rabbitmq:password | +RABBITMQ__PASSWORD | +guest | +RabbitMQ password | +
rabbitmq:queue_type | +RABBITMQ__QUEUE_TYPE | +"classic" | +RabbitMQ Queue Type ("classic" or "quorum") | +
- | +- | +- | +- | +
rabbitmq:use_ssl | +RABBITMQ__USE_SSL | +false |
+Use TLS connection | +
rabbitmq:use_ssl_cert | +RABBITMQ__USE_SSL_CERT | ++ | Path or cert content | +
rabbitmq:use_ssl_key | +RABBITMQ__USE_SSL_KEY | ++ | Path or key content | +
rabbitmq:use_ssl_pfx | +RABBITMQ__USE_SSL_PFX | ++ | Path or pfx content | +
rabbitmq:use_ssl_ca | +RABBITMQ__USE_SSL_CA | ++ | Path or cacert content | +
rabbitmq:use_ssl_passphrase | +RABBITMQ__SSL_PASSPHRASE | ++ | Passphrase for the key certificate | +
rabbitmq:use_ssl_reject_unauthorized | +RABBITMQ__SSL_REJECT_UNAUTHORIZED | +false |
+Reject rabbit self signed certificate | +
- | +- | +- | +- | +
rabbitmq:management_ssl | +RABBITMQ__MANAGEMENT_SSL | +false |
+Whether the Management Plugin has TLS enabled | +
rabbitmq:management_ssl_reject_unauthorized | +RABBITMQ__MANAGEMENT_SSL_REJECT_UNAUTHORIZED | +true |
+Reject management self signed certificate | +
Parameter | +Environment variable | +Default value | +Description | +
---|---|---|---|
minio:endpoint | +MINIO__ENDPOINT | +localhost | +Hostname of the S3 Service | +
minio:port | +MINIO__PORT | +9000 | +Port of the S3 Service | +
minio:use_ssl | +MINIO__USE_SSL | +false |
+Whether the S3 service has TLS enabled | +
minio:access_key | +MINIO__ACCESS_KEY | +ChangeMe | +The S3 Service access key | +
minio:secret_key | +MINIO__SECRET_KEY | +ChangeMe | +The S3 Service secret key | +
minio:bucket_name | +MINIO__BUCKET_NAME | +opencti-bucket | +The S3 bucket name (useful to change if you use AWS) | +
minio:bucket_region | +MINIO__BUCKET_REGION | +us-east-1 | +The S3 bucket region if you use AWS | +
minio:use_aws_role | +MINIO__USE_AWS_ROLE | +false |
+To use AWS role auto credentials | +
Parameter | +Environment variable | +Default value | +Description | +
---|---|---|---|
smtp:hostname | +SMTP__HOSTNAME | ++ | SMTP Server hostname | +
smtp:port | +SMTP__PORT | +9000 | +SMTP Port (25 or 465 for TLS) | +
smtp:use_ssl | +SMTP__USE_SSL | +false |
+SMTP over TLS | +
smtp:reject_unauthorized | +SMTP__REJECT_UNAUTHORIZED | +false |
+Enable TLS certificate check | +
smtp:username | +SMTP__USERNAME | ++ | SMTP Username if authentication is needed | +
smtp:password | +SMTP__PASSWORD | ++ | SMTP Password if authentication is needed | +
Parameter | +Environment variable | +Default value | +Description | +
---|---|---|---|
rule_engine:enabled | +RULE_ENGINE__ENABLED | +true |
+Enable/disable the rule engine | +
rule_engine:lock_key | +RULE_ENGINE__LOCK_KEY | +rule_engine_lock | +Lock key of the engine in Redis | +
- | +- | +- | +- | +
history_manager:enabled | +HISTORY_MANAGER__ENABLED | +true |
+Enable/disable the history manager | +
history_manager:lock_key | +HISTORY_MANAGER__LOCK_KEY | +history_manager_lock | +Lock key for the manager in Redis | +
- | +- | +- | +- | +
task_scheduler:enabled | +TASK_SCHEDULER__ENABLED | +true |
+Enable/disable the task scheduler | +
task_scheduler:lock_key | +TASK_SCHEDULER__LOCK_KEY | +task_manager_lock | +Lock key for the scheduler in Redis | +
task_scheduler:interval | +TASK_SCHEDULER__INTERVAL | +10000 | +Interval to check new task to do (in ms) | +
- | +- | +- | +- | +
sync_manager:enabled | +SYNC_MANAGER__ENABLED | +true |
+Enable/disable the sync manager | +
sync_manager:lock_key | +SYNC_MANAGER__LOCK_KEY | +sync_manager_lock | +Lock key for the manager in Redis | +
sync_manager:interval | +SYNC_MANAGER__INTERVAL | +10000 | +Interval to check new sync feeds to consume (in ms) | +
- | +- | +- | +- | +
expiration_scheduler:enabled | +EXPIRATION_SCHEDULER__ENABLED | +true |
+Enable/disable the scheduler | +
expiration_scheduler:lock_key | +EXPIRATION_SCHEDULER__LOCK_KEY | +expired_manager_lock | +Lock key for the scheduler in Redis | +
expiration_scheduler:interval | +EXPIRATION_SCHEDULER__INTERVAL | +300000 | +Interval to check expired indicators (in ms) | +
- | +- | +- | +- | +
retention_manager:enabled | +RETENTION_MANAGER__ENABLED | +true |
+Enable/disable the retention manager | +
retention_manager:lock_key | +RETENTION_MANAGER__LOCK_KEY | +retention_manager_lock | +Lock key for the manager in Redis | +
retention_manager:interval | +RETENTION_MANAGER__INTERVAL | +60000 | +Interval to check items to be deleted (in ms) | +
- | +- | +- | +- | +
notification_manager:enabled | +NOTIFICATION_MANAGER__ENABLED | +true |
+Enable/disable the notification manager | +
notification_manager:lock_key | +NOTIFICATION_MANAGER__LOCK_KEY | +notification_manager_lock | +Lock key for the manager in Redis | +
notification_manager:interval | +NOTIFICATION_MANAGER__INTERVAL | +10000 | +Interval to push notifications | +
- | +- | +- | +- | +
publisher_manager:enabled | +PUBLISHER_MANAGER__ENABLED | +true |
+Enable/disable the publisher manager | +
publisher_manager:lock_key | +PUBLISHER_MANAGER__LOCK_KEY | +publisher_manager_lock | +Lock key for the manager in Redis | +
publisher_manager:interval | +PUBLISHER_MANAGER__INTERVAL | +10000 | +Interval to send notifications / digests (in ms) | +
- | +- | +- | +- | +
ingestion_manager:enabled | +INGESTION_MANAGER__ENABLED | +true |
+Enable/disable the ingestion manager | +
ingestion_manager:lock_key | +INGESTION_MANAGER__LOCK_KEY | +ingestion_manager_lock | +Lock key for the manager in Redis | +
ingestion_manager:interval | +INGESTION_MANAGER__INTERVAL | +300000 | +Interval to check for new data in remote feeds | +
- | +- | +- | +- | +
playbook_manager:enabled | +PLAYBOOK_MANAGER__ENABLED | +true |
+Enable/disable the playbook manager | +
playbook_manager:lock_key | +PLAYBOOK_MANAGER__LOCK_KEY | +playbook_manager_lock | +Lock key for the manager in Redis | +
playbook_manager:interval | +PLAYBOOK_MANAGER__INTERVAL | +60000 | +Interval to check new playbooks | +
Default file
+It is possible to check all default parameters implemented in the platform in the default.json
file.
Can be configured manually using the configuration file config.yml
or through environment variables.
Parameter | +Environment variable | +Default value | +Description | +
---|---|---|---|
opencti:url | +OPENCTI_URL | ++ | The URL of the OpenCTI platform | +
opencti:token | +OPENCTI_TOKEN | ++ | A token of an administrator account with bypass capability | +
- | +- | +- | +- | +
mq:use_ssl | +/ | +/ | +Depending on the API configuration (fetched from API) | +
mq:use_ssl_ca | +MQ_USE_SSL_CA | ++ | Path or cacert content | +
mq:use_ssl_cert | +MQ_USE_SSL_CERT | ++ | Path or cert content | +
mq:use_ssl_key | +MQ_USE_SSL_KEY | ++ | Path or key content | +
mq:use_ssl_passphrase | +MQ_USE_SSL_PASSPHRASE | ++ | Passphrase for the key certificate | +
mq:use_ssl_reject_unauthorized | +MQ_USE_SSL_REJECT_UNAUTHORIZED | +false |
+Reject rabbit self signed certificate | +
Parameter | +Environment variable | +Default value | +Description | +
---|---|---|---|
worker:log_level | +WORKER_LOG_LEVEL | +info | +The log level (error, warning, info or debug) | +
For specific connector configuration, you need to check each connector behavior.
+If you want to adapt the memory consumption of ElasticSearch, you can use these options:
+ +This can also be done in the jvm.options configuration file.
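For a container deployment, a common way is the standard ES_JAVA_OPTS environment variable on the ElasticSearch service (the 8g heap size is only an example; size it for your own workload):

```yaml
- "ES_JAVA_OPTS=-Xms8g -Xmx8g"
```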
Connectors list
+Are you looking for the available connectors? The list is available in the OpenCTI Ecosystem.
+Connectors are the cornerstone of the OpenCTI platform and allow organizations to easily ingest, enrich or export data in the platform. According to their functionality and use case, they are categorized in the following classes.
+ +These connectors automatically retrieve information from an external organization, application or service, convert it to STIX 2.1 bundles and import it into OpenCTI using the workers.
+When a new object is created in the platform or on the user request, it is possible to trigger the internal enrichment connector to lookup and/or search the object in external organizations, applications or services. If the object is found, the connectors will generate a STIX 2.1 bundle which will increase the level of knowledge about the concerned object.
+These connectors connect to a platform data stream and continuously do something with the received events. In most cases, they are used to consume OpenCTI data and insert them in third-party platforms such as SIEMs, XDRs, EDRs, etc. In some cases, stream connectors can also query the external system on a regular basis and act as an import connector, for instance to gather alerts and sightings related to CTI data and push them to OpenCTI (bi-directional).
+Information from an uploaded file can be extracted and ingested into OpenCTI. Examples are files attached to a report or a STIX 2.1 file.
+Information stored in OpenCTI can be extracted into different file formats like .csv or .json (STIX 2).
+All connectors must be able to access the OpenCTI API. To allow this connection, they have 2 mandatory configuration parameters, the OPENCTI_URL
and the OPENCTI_TOKEN
. In addition to these 2 parameters, connectors have other mandatory parameters that need to be set in order to get them working.
Connectors tokens
+Be careful, we strongly recommend using a dedicated token for each connector running in the platform, so you have to create a specific user for each of them.
+Also, while all connectors can run with a user belonging to the Connectors
group (with the Connector
role), the Internal Export Files
connectors should be run with a user who is Administrator (with bypass capability), because they impersonate the user requesting the export to avoid data leaks.
Type | +Required role | +Used permissions | +
---|---|---|
EXTERNAL_IMPORT | +Connector | +Import data with the connector user. | +
INTERNAL_ENRICHMENT | +Connector | +Enrich data with the connector user. | +
INTERNAL_IMPORT_FILE | +Connector | +Import data with the connector user. | +
INTERNAL_EXPORT_FILE | +Administrator | +Export data with the user who requested the export. | +
STREAM | +Connector | +Consume the streams with the connector user. | +
Here is an example of a connector docker-compose.yml
file:
+
- CONNECTOR_ID=ChangeMe
+- CONNECTOR_TYPE=EXTERNAL_IMPORT
+- CONNECTOR_NAME=MITRE ATT&CK
+- CONNECTOR_SCOPE=identity,attack-pattern,course-of-action,intrusion-set,malware,tool,report
+- CONNECTOR_CONFIDENCE_LEVEL=3
+- CONNECTOR_UPDATE_EXISTING_DATA=true
+- CONNECTOR_LOG_LEVEL=info
+
Here is an example in a connector config.yml
file:
-connector:
+ id: 'ChangeMe'
+ type: 'EXTERNAL_IMPORT'
+ name: 'MITRE ATT&CK'
+ scope: 'identity,attack-pattern,course-of-action,intrusion-set,malware,tool,report'
+ confidence_level: 3
+ update_existing_data: true
+ log_level: 'info'
+
Be aware that all connectors reach RabbitMQ based on the RabbitMQ configuration provided by the OpenCTI platform. The connector must be able to reach RabbitMQ on the specified hostname and port. If you have a specific Docker network configuration, please be sure to adapt your docker-compose.yml
file in such a way that the connector container gets attached to the OpenCTI network, e.g.:
As mentioned previously, it is strongly recommended to run each connector with its own user. The Internal Export File
connectors should be launched with a user that belongs to a group which has an “Administrator” role (with bypass all capabilities enabled).
By default, a group named "Connectors" already exists in the platform. So just create a new user with the name [C] Name of the connector
in Settings > Security > Users.
Just go to the user you have just created and add it to the Connectors
group.
Then just get the token of the user displayed in the interface.
+ +You can either directly run the Docker image of connectors or add them to your current docker-compose.yml
file.
For instance, to enable the MISP connector, you can add a new service to your docker-compose.yml
file:
connector-misp:
+ image: opencti/connector-misp:latest
+ environment:
+ - OPENCTI_URL=http://localhost
+ - OPENCTI_TOKEN=ChangeMe
+ - CONNECTOR_ID=ChangeMe
+ - CONNECTOR_TYPE=EXTERNAL_IMPORT
+ - CONNECTOR_NAME=MISP
+ - CONNECTOR_SCOPE=misp
+ - CONNECTOR_CONFIDENCE_LEVEL=3
+ - CONNECTOR_UPDATE_EXISTING_DATA=false
+ - CONNECTOR_LOG_LEVEL=info
+ - MISP_URL=http://localhost # Required
+ - MISP_KEY=ChangeMe # Required
+ - MISP_SSL_VERIFY=False # Required
+ - MISP_CREATE_REPORTS=True # Required, create report for MISP event
+ - MISP_REPORT_CLASS=MISP event # Optional, report_class if creating report for event
+ - MISP_IMPORT_FROM_DATE=2000-01-01 # Optional, import all event from this date
+ - MISP_IMPORT_TAGS=opencti:import,type:osint # Optional, list of tags used for import events
+ - MISP_INTERVAL=1 # Required, in minutes
+ restart: always
+
To launch a standalone connector, you can use the docker-compose.yml
file of the connector itself. Just download the latest release and start the connector:
$ wget https://github.com/OpenCTI-Platform/connectors/archive/{RELEASE_VERSION}.zip
+$ unzip {RELEASE_VERSION}.zip
+$ cd connectors-{RELEASE_VERSION}/misp/
+
Change the configuration in the docker-compose.yml
according to the parameters of the platform and of the targeted service. Then launch the connector:
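The launch command was elided here; for a Docker-based connector it is presumably the standard detached Compose start:

```
$ docker-compose up -d
```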
If you want to launch a connector manually, you just have to install Python 3 and pip3 for the dependencies:
+ +Download the release of the connectors:
+$ wget <https://github.com/OpenCTI-Platform/connectors/archive/{RELEASE_VERSION}.zip>
+$ unzip {RELEASE_VERSION}.zip
+$ cd connectors-{RELEASE_VERSION}/misp/src/
+
Install dependencies and initialize the configuration:
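Based on the template layout shown elsewhere in this documentation (requirements.txt and config.yml.sample under src/), the elided commands are presumably:

```
$ pip3 install -r requirements.txt
$ cp config.yml.sample config.yml
```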
+ +Change the config.yml
content according to the parameters of the platform and of the targeted service and launch the connector:
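Since a connector is a long-running process launched by executing its main Python file, starting it manually presumably looks like:

```
$ python3 main.py
```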
The connector status can be displayed in the dedicated section of the platform available in Data > Connectors. You will be able to see the statistics of the RabbitMQ queue of the connector:
+ +Problem
+If you encounter problems deploying OpenCTI or connectors, you can consult the troubleshooting page.
+All components of OpenCTI are shipped both as Docker images and manual installation packages.
+Production deployment
+For production deployment, we recommend deploying all components in containers, including dependencies, using native cloud services or orchestration systems such as Kubernetes.
+To have more details about deploying OpenCTI and its dependencies in cluster mode, please read the dedicated section.
+Use Docker
+Deploy OpenCTI using Docker and the default docker-compose.yml
provided
+in the Docker GitHub repository.
Manual installation
+Deploy dependencies and launch the platform manually using the packages +released in the GitHub releases.
+ +OpenCTI can be deployed using the docker-compose command.
+Linux
+ +Windows and MacOS
+Just download the appropriate Docker Desktop version for your operating system.
+Docker helpers are available in the Docker GitHub repository.
+$ mkdir -p /path/to/your/app && cd /path/to/your/app
+$ git clone https://github.com/OpenCTI-Platform/docker.git
+$ cd docker
+
Before running the docker-compose
command, the docker-compose.yml
file should be configured. By default, the docker-compose.yml
file is using environment variables available in the file .env.sample
.
You can either rename the file .env.sample
to .env
and put the expected values, or directly fill in the docker-compose.yml
with the values corresponding to your environment.
Configuration static parameters
+The complete list of available static parameters is available in the configuration section.
+Here is an example to quickly generate the .env
file under Linux, especially all the default UUIDv4:
$ sudo apt install -y jq
+$ cd ~/docker
+$ (cat << EOF
+OPENCTI_ADMIN_EMAIL=admin@opencti.io
+OPENCTI_ADMIN_PASSWORD=ChangeMePlease
+OPENCTI_ADMIN_TOKEN=$(cat /proc/sys/kernel/random/uuid)
+MINIO_ROOT_USER=$(cat /proc/sys/kernel/random/uuid)
+MINIO_ROOT_PASSWORD=$(cat /proc/sys/kernel/random/uuid)
+RABBITMQ_DEFAULT_USER=guest
+RABBITMQ_DEFAULT_PASS=guest
+ELASTIC_MEMORY_SIZE=4G
+CONNECTOR_HISTORY_ID=$(cat /proc/sys/kernel/random/uuid)
+CONNECTOR_EXPORT_FILE_STIX_ID=$(cat /proc/sys/kernel/random/uuid)
+CONNECTOR_EXPORT_FILE_CSV_ID=$(cat /proc/sys/kernel/random/uuid)
+CONNECTOR_IMPORT_FILE_STIX_ID=$(cat /proc/sys/kernel/random/uuid)
+CONNECTOR_IMPORT_REPORT_ID=$(cat /proc/sys/kernel/random/uuid)
+EOF
+) > .env
+
If your docker-compose
deployment does not support .env
files, just export all environment variables before launching the platform:
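One common approach (an assumption, not the only way) is to source the file with automatic export enabled:

```
$ set -a; source .env; set +a
```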
+As OpenCTI has a dependency on ElasticSearch, you have to set the vm.max_map_count
before running the containers, as mentioned in the ElasticSearch documentation.
To make this parameter persistent, add the following to the end of your /etc/sysctl.conf
:
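The line to append is the vm.max_map_count setting; the value below is the one commonly used for OpenCTI deployments (ElasticSearch requires at least 262144), so adjust if your sizing differs:

```
vm.max_map_count=1048575
```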
The default for OpenCTI data is to be persistent.
+In the docker-compose.yml
, you will find at the end the list of necessary persistent volumes for the dependencies:
volumes:
+ esdata: # ElasticSearch data
+ s3data: # S3 bucket data
+ redisdata: # Redis data
+ amqpdata: # RabbitMQ data
+
After changing your .env
file run docker-compose
in detached (-d) mode:
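The elided command is presumably the standard detached start:

```
$ docker-compose up -d
```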
In order to have the best experience with Docker, we recommend using the Docker stack feature. In this mode you will have the capacity to easily scale your deployment.
+ +Put your environment variables in /etc/environment
:
# If you already exported your variables to .env from above:
+$ sudo bash -c 'cat .env >> /etc/environment'
+$ sudo docker stack deploy --compose-file docker-compose.yml opencti
+
Installation done
+You can now go to http://localhost:8080 and log in with the credentials configured in your environment variables.
+You have to install all the needed dependencies for the main application and the workers. The example below is for Debian-based systems:
+ +First, you have to download and extract the latest release file. Then select the version to install depending on your operating system:
+For Linux:
+opencti-release_{RELEASE_VERSION}.tar.gz
version.opencti-release-{RELEASE_VERSION}_musl.tar.gz
version.For Windows:
+We don't provide any Windows release for now. However, it is still possible to check out the code, manually install the dependencies and build the software.
+$ mkdir /path/to/your/app && cd /path/to/your/app
+$ wget <https://github.com/OpenCTI-Platform/opencti/releases/download/{RELEASE_VERSION}/opencti-release-{RELEASE_VERSION}.tar.gz>
+$ tar xvfz opencti-release-{RELEASE_VERSION}.tar.gz
+
The main application has just one JSON configuration file to change and a few Python modules to install.
+ +Change the config/production.json file according to your configuration of ElasticSearch, Redis, RabbitMQ and S3 bucket as well as default credentials (the ADMIN_TOKEN
must be a valid UUID).
The application is just a NodeJS process; the creation of the database schema and the data migration will be done automatically at startup.
+ +The default username and password are those you have put in the config/production.json
file.
The OpenCTI worker is used to write the data coming from the RabbitMQ messages broker.
+Change the config.yml file according to your OpenCTI token.
+Installation done
+You can now go to http://localhost:4000 and log in with the credentials configured in your production.json
file.
Multi-clouds Terraform scripts
+This repository is here to provide you with a quick and easy way to deploy an OpenCTI instance in the cloud (AWS, Azure, or GCP).
+ +AWS Advanced Terraform scripts
+A Terraform deployment of OpenCTI designed to make use of native AWS Resources (where feasible). This includes AWS ECS Fargate, AWS OpenSearch, etc.
+ +Kubernetes Helm Charts
+OpenCTI Helm Charts (may be out of date) for Kubernetes with a global configuration file.
+ +If you want to use OpenCTI behind a reverse proxy with a context path, like https://domain.com/opencti
, please change the base_path
static parameter.
APP__BASE_PATH=/opencti
By default, OpenCTI uses websockets, so don't forget to configure your proxy for this usage. An example with Nginx
:
location / {
+ proxy_cache off;
+ proxy_buffering off;
+ proxy_http_version 1.1;
+ proxy_set_header Upgrade $http_upgrade;
+ proxy_set_header Connection "upgrade";
+ proxy_set_header Host $host;
+ chunked_transfer_encoding off;
+ proxy_pass http://YOUR_UPSTREAM_BACKEND;
+ }
+
OpenCTI platform is based on a NodeJS runtime, with a memory limit of 8GB by default. If you encounter OutOfMemory
exceptions, this limit could be changed:
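For a Node.js process, the heap limit is controlled with the --max-old-space-size option; in a Docker deployment this can be passed as an environment variable (the value below is an example, adjust it to your sizing):

```
- NODE_OPTIONS=--max-old-space-size=8096
```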
OpenCTI workers and connectors are Python processes. If you want to limit the memory of the process, we recommend using Docker directly to do that. You can find more information in the official Docker documentation.
+ElasticSearch is also a JAVA process. In order to setup the JAVA memory allocation, you can use the environment variable ES_JAVA_OPTS
. You can find more information in the official ElasticSearch documentation.
Redis has a very small footprint on keys but will consume memory for the stream. By default, the size of the stream is limited to 2 million events, which represents a memory footprint of around 8 GB
. You can find more information in the Redis docker hub.
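If your platform version exposes it, the stream size can be tuned through the redis:trimming parameter (verify against the configuration section of your version before relying on it), for example in Docker:

```
- REDIS__TRIMMING=2000000 # Maximum number of events kept in the Redis stream
```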
MinIO is a small process and does not require a high amount of memory. More information is available for Linux in the kernel tuning guide.
The RabbitMQ memory configuration can be found in the official RabbitMQ documentation. RabbitMQ will consume memory up to a specific threshold, therefore it should be configured along with the Docker memory limitation.
+ + + + + + + + + + + + + + + + + + +OpenCTI supports multiple ways to integrate with other systems which do not have native connectors or plugins to the platform. Here are the technical features available to ease the connection and the integration of the platform with other applications.
+Connectors list
+If you are looking for the list of OpenCTI connectors or native integrations, please check the OpenCTI Ecosystem.
+To ease integrations with other products, OpenCTI has built-in capabilities to deliver the data to third-parties.
+It is possible to create as many CSV feeds as needed, based on filters and accessible in HTTP. CSV feeds are available in Data > Data sharing > Feeds (CSV).
+When creating a CSV feed, you need to select one or multiple types of entity to make available. For all columns available in the CSV, you've to select which field will be used for each type of entity:
+ +Details
+For more information about CSV feeds, filters and configuration, please check the Export in structured format section.
+Most modern cybersecurity systems such as SIEMs, EDRs, XDRs and even firewalls support the TAXII protocol, which is basically a paginated HTTP STIX feed. OpenCTI implements a TAXII 2.1 server with the ability to create as many TAXII collections as needed in Data > Data sharing > TAXII Collections.
+TAXII collections are a sub-selection of the knowledge available in the platform and rely on filters. For instance, it is possible to create TAXII collections for pieces of malware with a given label, for indicators with a score greater than n, etc.
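As a sketch of what consuming such a collection looks like from a script, the request can be assembled as below. The /taxii2/root discovery root and the ChangeMe identifiers are assumptions to verify against your own deployment's TAXII discovery endpoint:

```python
def taxii_objects_url(base_url, collection_id, limit=100):
    # Build the URL to page through a TAXII 2.1 collection's objects.
    # The "/taxii2/root" prefix is an assumption based on common deployments.
    return (
        f"{base_url.rstrip('/')}/taxii2/root/collections/"
        f"{collection_id}/objects/?limit={limit}"
    )

def taxii_headers(token):
    # TAXII 2.1 mandates this specific Accept media type
    return {
        "Accept": "application/taxii+json;version=2.1",
        "Authorization": f"Bearer {token}",
    }

url = taxii_objects_url("https://opencti.example.com", "ChangeMe", limit=50)
# An HTTP GET on `url` with `taxii_headers(token)` returns a STIX envelope.
```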
+After implementing CSV feeds and TAXII collections, we figured out that those 2 stateless APIs are definitely not enough when it comes to tackling advanced information sharing challenges such as:
+Live streams are available in Data > Data sharing > Live streams. As with TAXII collections, it is possible to create as many streams as needed using filters.
+Streams implement the HTTP SSE (Server-sent events) protocol and allow applications to consume a real-time pure STIX 2.1 stream. Stream connectors in the OpenCTI Ecosystem use live streams to consume data and do something such as create / update / delete information in SIEMs, XDRs, etc.
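To illustrate the wire format, the sketch below parses raw SSE lines into events. It is a minimal parser for illustration only (the sample payload is hypothetical); a production consumer should use a dedicated SSE client library and handle retries, event ids and heartbeats:

```python
import json

def parse_sse_events(lines):
    # Yield (event_type, payload) tuples from raw Server-sent events lines
    event, data = None, []
    for line in lines:
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and data:  # a blank line terminates an event
            yield event, json.loads("\n".join(data))
            event, data = None, []

# Sample lines shaped like a live stream could emit them (illustrative payload)
sample = [
    'event: create',
    'data: {"data": {"type": "indicator", "name": "demo"}}',
    '',
]
events = list(parse_sse_events(sample))
```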
+For all previously explained capabilities, as they are over the HTTP protocol, 3 authentication mechanisms are available to consume them.
+Using a bearer header with your OpenCTI API key
+ +API Key
+Your API key can be found in your profile, available by clicking on the top right icon.
+Using basic authentication
+ +Using client certificate authentication
+To know how to configure the client certificate authentication, please consult the authentication configuration section.
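The first two mechanisms boil down to an Authorization header; the sketch below shows how each header is built (the token value is a placeholder, and the certificate paths in the comment are hypothetical):

```python
import base64

API_TOKEN = "ChangeMe"  # your OpenCTI API key (placeholder value)

# 1. Bearer header carrying the OpenCTI API key
bearer_headers = {"Authorization": f"Bearer {API_TOKEN}"}

# 2. Basic authentication: "login:password" encoded in base64
credentials = base64.b64encode(b"login:password").decode()
basic_headers = {"Authorization": f"Basic {credentials}"}

# 3. Client certificate authentication happens at the TLS layer rather than
#    in a header, e.g. requests.get(url, cert=("client.crt", "client.key"))
```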
+To allow analysts and developers to implement more custom or complex use cases, a full GraphQL API is available in the application on the /graphql
endpoint.
The API can be queried using various GraphQL clients such as Postman, but you can leverage any HTTP client to forge GraphQL queries using POST
methods.
The API authentication can be performed using the token of a user and a classic Authorization header:
+ +The playground is available on the /graphql
endpoint. A link button is also available in the profile of your user.
All the schema documentation is directly available in the playground.
+If you are already logged in to OpenCTI in the same browser, you should be able to make requests directly. If you are not authenticated or want to authenticate only through the playground, you can set a header configuration using your profile token.
+Example of configuration (bottom left of the playground):
+ +Since not everyone is familiar with GraphQL APIs, we've developed a Python library to ease the interaction with it. The library is pretty easy to use. To initiate the client:
+# coding: utf-8
+
+from pycti import OpenCTIApiClient
+
+# Variables
+api_url = "http://opencti:4000"
+api_token = "bfa014e0-e02e-4aa6-a42b-603b19dcf159"
+
+# OpenCTI initialization
+opencti_api_client = OpenCTIApiClient(api_url, api_token)
+
Then just use the available helpers: +
# Search for malware with the keyword "windows"
+malwares = opencti_api_client.malware.list(search="windows")
+
+# Print
+print(malwares)
+
Details
+For more detailed information about the Python library, please read the dedicated section.
+Before starting the installation, let's discover how OpenCTI is working, which dependencies are needed and what are the minimal requirements to deploy it in production.
+The OpenCTI platform relies on several external databases and services in order to work.
+The platform is the central part of the OpenCTI technological stack. It allows users to access the user interface and also provides the GraphQL API used by connectors and workers to insert data. In the context of a production deployment, you may need to scale horizontally and launch multiple platforms behind a load balancer connected to the same databases (ElasticSearch, Redis, S3, RabbitMQ).
+The workers are standalone Python processes consuming messages from the RabbitMQ broker in order to do asynchronous write queries. You can launch as many workers as you need to increase the write performances. At some point, the write performances will be limited by the throughput of the ElasticSearch database cluster.
+Number of workers
+If you need to increase performance, it is better to launch more platforms to handle worker queries. The recommended setup is to have at least one platform for 3 workers (i.e. 9 workers distributed over 3 platforms).
+The connectors are third-party pieces of software (Python processes) that can play five different +roles on the platform:
+Type | +Description | +Examples | +
---|---|---|
EXTERNAL_IMPORT | +Pull data from remote sources, convert it to STIX2 and insert it on the OpenCTI platform. | +MITRE Datasets, MISP, CVE, AlienVault, Mandiant, etc. | +
INTERNAL_ENRICHMENT | +Listen for new OpenCTI entities or users requests, pull data from remote sources to enrich. | +Shodan, DomainTools, IpInfo, etc. | +
INTERNAL_IMPORT_FILE | +Extract data from files uploaded on OpenCTI through the UI or the API. | +STIX 2.1, PDF, Text, HTML, etc. | +
INTERNAL_EXPORT_FILE | +Generate export from OpenCTI data, based on a single object or a list. | +STIX 2.1, CSV, PDF, etc. | +
STREAM | +Consume a platform data stream and do something with events. | +Splunk, Elastic Security, Q-Radar, etc. | +
List of connectors
+You can find all currently available connectors in the OpenCTI Ecosystem.
+Component | +Version | +CPU | +RAM | +Disk type | +Disk space | +
---|---|---|---|---|---|
ElasticSearch / OpenSearch | +≥ 8.0 / ≥ 2.9 | +2 cores | +≥ 8GB | +SSD | +≥ 16GB | +
Redis | +≥ 7.1 | +1 core | +≥ 1GB | +SSD | +≥ 16GB | +
RabbitMQ | +≥ 3.11 | +1 core | +≥ 512MB | +Standard | +≥ 2GB | +
S3 / MinIO | +≥ RELEASE.2023-02 | +1 core | +≥ 128MB | +SSD | +≥ 16GB | +
Component | +CPU | +RAM | +Disk type | +Disk space | +
---|---|---|---|---|
OpenCTI Core | +2 cores | +≥ 8GB | +None (stateless) | +- | +
Worker(s) | +1 core | +≥ 128MB | +None (stateless) | +- | +
Connector(s) | +1 core | +≥ 128MB | +None (stateless) | +- | +
Clustering
+To have more details about deploying OpenCTI and its dependencies in cluster mode, please read the dedicated section.
+OpenCTI is an open and modular platform. A lot of connectors, plugins and clients are created by Filigran and community. You can find here other resources available to complete your OpenCTI journey.
+Verticalized threat landcapes
+Access to monthly sectorial analysis from our experts team based on knowledge and +data collected by our partners.
+ +Case studies
+Explore the Filigran case studies about stories and usages of the platform +among our communities and customers.
+ +Default rollover policies
+Since OpenCTI 5.9.0, rollover policies are automatically created when the platform is initialized for the first time. If your platform has been initialized using an older version of OpenCTI or if you would like to understand (and customize) rollover policies please read the following documentation.
+ElasticSearch and OpenSearch both support rollover on indices. OpenCTI has been designed to use aliases for indices and thus supports index lifecycle policies very well. By default, OpenCTI initializes indices with a suffix -00001
and uses a wildcard to query indices. When rollover policies are implemented (the default starting with OpenCTI 5.9.X if you initialized your platform at this version), indices are split to keep a reasonable volume of data in shards.
By default, a rollover policy is applied on all indices used by OpenCTI.
+opencti_history
opencti_inferred_entities
opencti_inferred_relationships
opencti_internal_objects
opencti_internal_relationships
opencti_stix_core_relationships
opencti_stix_cyber_observable_relationships
opencti_stix_cyber_observables
opencti_stix_domain_objects
opencti_stix_meta_objects
opencti_stix_meta_relationships
opencti_stix_sighting_relationships
For your information, the indices which can grow rapidly are:
+opencti_stix_meta_relationships
: it contains all the nested relationships between objects and labels / marking definitions / external references / authors, etc.opencti_history
: it contains the history log of all objects in the platform.opencti_stix_cyber_observables
: it contains all observables stored in the platform.opencti_stix_core_relationships
: it contains all main STIX relationships stored in the platform.Here is the recommended policy (initialized starting 5.9.X):
+50 GB
365 days
75,000,000
Procedure information
+Please read the following only if your platform has been initialized before 5.9.0; otherwise lifecycle policies have already been created (but you can still customize them).
+Unfortunately, to be able to implement rollover policies on ElasticSearch / OpenSearch indices, you will need to re-index all the data into new indices using ElasticSearch capabilities.
+The first step is to shut down your OpenCTI platform.
+Then, in the OpenCTI configuration, change the ElasticSearch / OpenSearch default prefix to octi
(default is opencti
).
Create a rollover policy named octi-ilm-policy
(in Kibana, Management > Index Lifecycle Policies
):
50 GB
365 days
75,000,000
In Kibana, clone the opencti-index-template
to have one index template by OpenCTI index with the appropriate rollover policy, index pattern and rollover alias (in Kibana, Management > Index Management > Index Templates
).
Create the following index templates:
+octi_history
octi_inferred_entities
octi_inferred_relationships
octi_internal_objects
octi_internal_relationships
octi_stix_core_relationships
octi_stix_cyber_observable_relationships
octi_stix_cyber_observables
octi_stix_domain_objects
octi_stix_meta_objects
octi_stix_meta_relationships
octi_stix_sighting_relationships
Here is the overview of all templates (you should have something with octi_
instead of opencti_
).
Then, going back in the index lifecycle policies screen, you can click on the "+" button of the octi-ilm-policy
to Add the policy to index template
, then add the policy to each previously created template with the proper "Alias for rollover index".
Before we can re-index, we need to create the new indices with aliases.
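Following the curl style used for reindexing below, an index with its rollover write alias can be created through the ElasticSearch API, for example for octi_history (same pattern for the others):

```
curl -X PUT "localhost:9200/octi_history-000001?pretty" -H 'Content-Type: application/json' -d'
{
  "aliases": {
    "octi_history": {
      "is_write_index": true
    }
  }
}
'
```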
+ +Repeat this step for all indices:
+octi_history
octi_inferred_entities
octi_inferred_relationships
octi_internal_objects
octi_internal_relationships
octi_stix_core_relationships
octi_stix_cyber_observable_relationships
octi_stix_cyber_observables
octi_stix_domain_objects
octi_stix_meta_objects
octi_stix_meta_relationships
octi_stix_sighting_relationships
Using the reindex
API, re-index all indices one by one:
curl -X POST "localhost:9200/_reindex?pretty" -H 'Content-Type: application/json' -d'
+{
+ "source": {
+ "index": "opencti_history-000001"
+ },
+ "dest": {
+ "index": "octi_history"
+ }
+}
+'
+
+You will see the rollover policy being applied, and the new indices are automatically rolled over during reindexing.
+Then just delete all indices with the prefix opencti_
.
Start your platform, using the new indices.
+Rollover documentation
+To have more details about automatic rollover and lifecycle policies, please read the official ElasticSearch documentation.
+This page aims to explains the typical errors you can have with your OpenCTI platform.
+It is highly recommended to monitor the error logs of the platforms, workers and connectors. All the components have log outputs in an understandable JSON format. If necessary, it is always possible to increase the log level. In production, it is recommended to have the log level set to error
.
Here are some useful parameters for platform logging:
+- APP__APP_LOGS__LOGS_LEVEL=[error|warning|info|debug]
+- APP__APP_LOGS__LOGS_CONSOLE=true # Output in the container console
+
All connectors support the same set of parameters to manage the log level and outputs:
+- OPENCTI_JSON_LOGGING=true # Enable / disable JSON logging
+- CONNECTOR_LOG_LEVEL=[error|warning|info|debug]
+
The workers can have more or less verbose outputs:
+- OPENCTI_JSON_LOGGING=true # Enable / disable JSON logging
+- WORKER_LOG_LEVEL=[error|warning|info|debug]
+
Missing reference to handle creation
+After 5 retries, if an element required to create another element is missing, the platform raises an exception. It usually comes from a connector that generates inconsistent STIX 2.1 bundles.
+Cant upsert entity. Too many entities resolved
+OpenCTI received an entity which matches too many other entities in the platform. In this situation, the platform cannot make a decision. You need to dig into the data bundle to identify why it matches too many entities, and fix the data in the bundle or in the platform according to what you expect.
+Execution timeout, too many concurrent call on the same entities
+The platform supports multi workers and multiple parallel creation but different parameters can lead to some locking timeout in the execution.
+If you have this kind of error, limit the number of workers deployed. Try to find the right balance of the number of workers, connectors and elasticsearch sizing.
+Indicator of type yara is not correctly formatted
+OpenCTI checks the validity of the indicator rule.
+Observable of type IPv4-Addr is not correctly formatted
+OpenCTI checks the validity of the observable value.
+TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark...
+Disk full, no space left on the device for ElasticSearch.
+Depending on your installation mode, the upgrade path may change.
+Migrations
+The platform takes care of all necessary underlying migrations in the databases, if any; you can upgrade OpenCTI from any version to the latest one, including skipping multiple major releases.
+Before applying this procedure, please update your docker-compose.yml
file with the new version number of container images.
For each of the services, you have to run the following command:
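The elided commands are presumably the standard pull-and-recreate sequence (adapt to the service names in your file):

```
$ sudo docker-compose pull
$ sudo docker-compose up -d
```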
When upgrading the platform, you have to replace all files and restart the platform; the database migrations will be done automatically:
+ + + + + + + + + + + + + + + + + + + +Under construction
+We are doing our best to complete this page. +If you want to participate, don't hesitate to join the Filigran Community on Slack +or submit your pull request on the Github doc repository.
+A connector in OpenCTI is a service that runs next to the platform and can be implemented in almost any programming language that has STIX2 support. Connectors are used to extend the functionality of OpenCTI and allow operators to shift some of the processing workload to external services. To use the conveniently provided OpenCTI connector SDK, you need to use Python 3 at the moment.
+We chose a very decentralized approach for connectors in order to bring maximum freedom to developers and vendors. A connector in OpenCTI can therefore be defined as a standalone Python 3 process that pushes an understandable format of data to an ingestion queue of messages.
+Each connector must implement a long-running process that can be launched just by executing the main Python file. The only mandatory dependency is the OpenCTIConnectorHelper
class that enables the connector to send data to OpenCTI.
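As a sketch of the data a connector produces, the snippet below builds a STIX 2.1 bundle from plain dictionaries; the malware object is a hypothetical example, and the commented helper calls require a configured connector and a running platform:

```python
import json
import uuid

def build_bundle(objects):
    # Wrap STIX 2.1 objects into a bundle that the connector helper can send
    return {"type": "bundle", "id": f"bundle--{uuid.uuid4()}", "objects": objects}

# Hypothetical STIX 2.1 domain object for illustration
malware = {
    "type": "malware",
    "spec_version": "2.1",
    "id": f"malware--{uuid.uuid4()}",
    "name": "demo-malware",
    "is_family": False,
}
bundle_json = json.dumps(build_bundle([malware]))

# With a configured helper (sketch only, not runnable standalone):
#   from pycti import OpenCTIConnectorHelper
#   helper = OpenCTIConnectorHelper(config)
#   helper.send_stix2_bundle(bundle_json)
```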
+First, think about your use case to choose an appropriate connector type: what do you want to achieve with your connector? The following table gives you an overview of the current connector types and some typical use cases:
+Connector types
+Type | +Typical use cases | +Example connector | +
---|---|---|
EXTERNAL_IMPORT | +Integrate external TI provider, Integrate external TI platform | +AlienVault | +
INTERNAL_ENRICHMENT | +Enhance existing data with additional knowledge | +AbuseIP | +
INTERNAL_IMPORT_FILE | +(Bulk) import knowledge from files | +Import document | +
INTERNAL_EXPORT_FILE | +(Bulk) export knowledge to files | +STIX 2.1, CSV. | +
STREAM | +Integrate external TI provider, Integrate external TI platform | +Elastic Security | +
+After you've selected your connector type, make yourself familiar with STIX2 and the supported relationships in OpenCTI. Having some knowledge about the internal data models will help you a lot with the implementation of your idea.
+To develop and test your connector, you need a running OpenCTI instance with the frontend and the messaging broker accessible. If you don't plan on developing anything for the OpenCTI platform or the frontend, the easiest setup for connector development is the Docker setup. For more details, see here.
+To give you an easy starting point we prepared an example connector in the public repository you can use as template to bootstrap your development.
+Some prerequisites we recommend to follow this tutorial:
+In the terminal check out the connectors repository and copy the template connector to $myconnector
(replace it with your name throughout the following text examples).
$ pip3 install black flake8 pycti
+# Fork the current repository, then clone your fork
+$ git clone https://github.com/YOUR-USERNAME/connectors.git
+$ cd connectors
+$ git remote add upstream https://github.com/OpenCTI-Platform/connectors.git
+# Create a branch for your feature/fix
+$ git checkout -b [branch-name]
+$ cp -r template $connector_type/$myconnector
+$ cd $connector_type/$myconnector
+$ tree .
+.
+├── docker-compose.yml
+├── Dockerfile
+├── entrypoint.sh
+├── README.md
+└── src
+ ├── config.yml.sample
+ ├── main.py
+ └── requirements.txt
+
+1 directory, 7 files
+
There are a few files in the template that we need to change to make our connector unique. You can find all the places where you need to change your connector name with the following command (the output will look similar):
+$ grep -Ri template .
+
+README.md:# OpenCTI Template Connector
+README.md:| `connector_type` | `CONNECTOR_TYPE` | Yes | Must be `Template_Type` (this is the connector type). |
+README.md:| `connector_name` | `CONNECTOR_NAME` | Yes | Option `Template` |
+README.md:| `connector_scope` | `CONNECTOR_SCOPE` | Yes | Supported scope: Template Scope (MIME Type or Stix Object) |
+README.md:| `template_attribute` | `TEMPLATE_ATTRIBUTE` | Yes | Additional setting for the connector itself |
+docker-compose.yml: connector-template:
+docker-compose.yml: image: opencti/connector-template:4.5.5
+docker-compose.yml: - CONNECTOR_TYPE=Template_Type
+docker-compose.yml: - CONNECTOR_NAME=Template
+docker-compose.yml: - CONNECTOR_SCOPE=Template_Scope # MIME type or Stix Object
+entrypoint.sh:cd /opt/opencti-connector-template
+Dockerfile:COPY src /opt/opencti-template
+Dockerfile: cd /opt/opencti-connector-template && \
+src/main.py:class Template:
+src/main.py: "TEMPLATE_ATTRIBUTE", ["template", "attribute"], config, True
+src/main.py: connectorTemplate = Template()
+src/main.py: connectorTemplate.run()
+src/config.yml.sample: type: 'Template_Type'
+src/config.yml.sample: name: 'Template'
+src/config.yml.sample: scope: 'Template_Scope' # MIME type or SCO
+
Required changes:
+Template
or template
mentions to your connector name e.g. ImportCsv
or importcsv
TEMPLATE
mentions to your connector name e.g. IMPORTCSV
Template_Scope
mentions to the required scope of your connector. For processing imported files, that can be the Mime type e.g. application/pdf
or for enriching existing information in OpenCTI, define the STIX object's name e.g. Report
. Multiple scopes can be separated by a simple ,
Template_Type
to the connector type you wish to develop. The OpenCTI types (OpenCTI flags) are defined in this table. After getting the configuration parameters of your connector, you have to initialize the OpenCTI connector helper by using the pycti
Python library. This is shown in the following example:
class TemplateConnector:
+ def __init__(self):
+ # Instantiate the connector helper from config
+ config_file_path = os.path.dirname(os.path.abspath(__file__)) + "/config.yml"
+ config = (
+ yaml.load(open(config_file_path), Loader=yaml.SafeLoader)
+ if os.path.isfile(config_file_path)
+ else {}
+ )
+ self.helper = OpenCTIConnectorHelper(config)
+ self.custom_attribute = get_config_variable(
+ "TEMPLATE_ATTRIBUTE", ["template", "attribute"], config
+ )
+
Since the tasks of the different connector classes differ somewhat, the structure is also a bit class dependent. While the external-import and stream connectors run independently, at a regular interval or constantly, the other three connector classes only run when requested by the OpenCTI platform.
+The self-triggered connectors run independently, whereas the OpenCTI-triggered connectors need to define a callback function that the platform can invoke to start their work. This is done via self.helper.listen(self._process_message)
. The examples below illustrate the difference in setup.
Self-triggered Connectors
+OpenCTI triggered
+from pycti import OpenCTIConnectorHelper, get_config_variable
+
+class TemplateConnector:
+ def __init__(self) -> None:
+ # Initialization procedures
+ [...]
+ self.template_interval = get_config_variable(
+ "TEMPLATE_INTERVAL", ["template", "interval"], config, True
+ )
+
+ def get_interval(self) -> int:
+ return int(self.template_interval) * 60 * 60 * 24
+
+ def run(self) -> None:
+ # Main procedure
+
+if __name__ == "__main__":
+ try:
+ template_connector = TemplateConnector()
+ template_connector.run()
+ except Exception as e:
+ print(e)
+ time.sleep(10)
+ exit(0)
+
from pycti import OpenCTIConnectorHelper, get_config_variable
+
+class TemplateConnector:
+ def __init__(self) -> None:
+ # Initialization procedures
+ [...]
+
+ def _process_message(self, data: dict) -> str:
+ # Main procedure
+
+ # Start the main loop
+ def start(self) -> None:
+ self.helper.listen(self._process_message)
+
+if __name__ == "__main__":
+ try:
+ template_connector = TemplateConnector()
+ template_connector.start()
+ except Exception as e:
+ print(e)
+ time.sleep(10)
+ exit(0)
+
When using the OpenCTIConnectorHelper
class, there are two ways for reading data from or writing data to the OpenCTI platform.
self.helper.api
self.helper.send_stix2_bundle
The recommended way to create or update data in the OpenCTI platform is via the OpenCTI worker. This enables the connector to send thousands of entities at once in a fire-and-forget manner, without having to think about ingestion order, performance or error handling.
+ + +The OpenCTI connector helper method send_stix2_bundle
must be used to send data to OpenCTI. The send_stix2_bundle
function takes 2 arguments.
string
(mandatory)list
of entity types that should be ingested (optional). Here is an example using the STIX2 Python library:
+from stix2 import Bundle, AttackPattern
+
+[...]
+
+attack_pattern = AttackPattern(name='Evil Pattern')
+
+bundle_objects = []
+bundle_objects.append(attack_pattern)
+
+bundle = Bundle(objects=bundle_objects).serialize()
+bundles_sent = self.opencti_connector_helper.send_stix2_bundle(bundle)
+
Read queries to the OpenCTI platform can be achieved using the API, and the returned STIX IDs can then be attached to reports to create the relationship between the two entities.
If you want to add the found entity via object_refs
to another SDO, simply add a list of stix_ids
to the SDO. Here's an example using the entity from the code snippet above:
from stix2 import Report
+
+[...]
+
+report = Report(
+ id=report["standard_id"],
+ object_refs=[entity["standard_id"]],
+)
+
When something crashes on a user's setup, you as a developer want to know as much as possible about the incident so that you can easily improve your code and remove the issue. To do so, it is very helpful if your connector documents what it does. Use info
messages for big changes like the beginning or the end of an operation, and, to facilitate debugging, implement debug
messages for minor operational changes to document the different steps in your code.
When encountering a crash, the connector's user can easily restart the troubling connector with the debug logging activated.
+CONNECTOR_LOG_LEVEL=debug
Using those additional log messages, the bug report is more enriched with information about the possible cause of the problem. Here's an example of how the logging should be implemented:
+ def run(self) -> None:
+ self.helper.log_info('Template connector starts')
+ results = self._ask_for_news()
+ [...]
+
+    def _ask_for_news(self) -> list:
+        overall = []
+        for i in range(0, 10):
+            self.helper.log_debug(f"Asking about news with count '{i}'")
+            # Do something
+            self.helper.log_debug(f"Result: '{result}'")
+            overall.append(result)
+        return overall
+
Please make sure that the debug messages are rich in useful information, but that they are not redundant and that the user is not drowned in unnecessary output.
+If you are still unsure about how to implement certain things in your connector, we advise you to have a look at the code of other connectors of the same type. Maybe they already use an approach that is suitable for your problem.
+OpenCTI sends the connector a few instructions via the data
dictionary in the callback function. Depending on the connector type, the data dictionary content is a bit different. Here are a few examples for each connector type.
Internal Import Connector
+Internal Enrichment Connector
+{
+ "file_id": "<fileId>",
+ "file_mime": "application/pdf",
+ "file_fetch": "storage/get/<file_id>", // Path to get the file
+ "entity_id": "report--82843863-6301-59da-b783-fe98249b464e", // Context of the upload
+}
+
Internal Export Connector
+{
+ "export_scope": "single", // 'single' or 'list'
+ "export_type": "simple", // 'simple' or 'full'
+ "file_name": "<fileName>", // Export expected file name
+ "max_marking": "<maxMarkingId>", // Max marking id
+ "entity_type": "AttackPattern", // Exported entity type
+ // ONLY for single entity export
+ "entity_id": "<entity.id>", // Exported element
+ // ONLY for list entity export
+ "list_params": "[<parameters>]" // Parameters for finding entities
+}
+
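As a sketch of how a connector might branch on these instructions (only the field names come from the example payloads above; the handler and its return values are hypothetical):

```python
def process_export_message(data: dict) -> str:
    """Dispatch on the export instructions OpenCTI sends (sketch).

    Field names ('export_scope', 'entity_type', ...) come from the example
    payload above; this handler itself is hypothetical.
    """
    export_scope = data["export_scope"]  # 'single' or 'list'
    entity_type = data["entity_type"]
    if export_scope == "single":
        # Single-entity export: the element to export is in 'entity_id'
        return f"export single {entity_type} {data['entity_id']}"
    if export_scope == "list":
        # List export: the search parameters are in 'list_params'
        return f"export list of {entity_type} with {data['list_params']}"
    raise ValueError(f"Unsupported export_scope: {export_scope}")
```

A real export connector would fetch the entity (or list) through the API helper and upload the rendered file, rather than returning a string.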
For self-triggered connectors, OpenCTI has to be told about new jobs to process and import. This is done by registering a so-called work
before sending the STIX bundle, and by signalling the end of the work. Here is an example:
With work registration implemented, runs will show up as in this screenshot of the MITRE ATT&CK connector:
+def run() -> None:
+    # Announce upcoming work
+ timestamp = int(time.time())
+ now = datetime.utcfromtimestamp(timestamp)
+ friendly_name = "Template run @ " + now.strftime("%Y-%m-%d %H:%M:%S")
+ work_id = self.helper.api.work.initiate_work(
+ self.helper.connect_id, friendly_name
+ )
+
+ [...]
+ # Send Stix bundle
+ self.helper.send_stix2_bundle(
+ bundle,
+ entities_types=self.helper.connect_scope,
+ update=True,
+ work_id=work_id,
+ )
+ # Finish the work
+ self.helper.log_info(
+ f"Connector successfully run, storing last_run as {str(timestamp)}"
+ )
+    message = f"Last_run stored, next run in: {str(round(self.get_interval() / 60 / 60 / 24, 2))} days"
+ self.helper.api.work.to_processed(work_id, message)
+
The connector is also responsible for making sure that it runs in certain intervals. In most cases, the intervals are definable in the connector config and then only need to be set and updated during the runtime.
+class TemplateConnector:
+ def __init__(self) -> None:
+ # Initialization procedures
+ [...]
+ self.template_interval = get_config_variable(
+ "TEMPLATE_INTERVAL", ["template", "interval"], config, True
+ )
+
+ def get_interval(self) -> int:
+ return int(self.template_interval) * 60 * 60 * 24
+
+ def run(self) -> None:
+ self.helper.log_info("Fetching knowledge...")
+ while True:
+ try:
+ # Get the current timestamp and check
+ timestamp = int(time.time())
+ current_state = self.helper.get_state()
+ if current_state is not None and "last_run" in current_state:
+ last_run = current_state["last_run"]
+ self.helper.log_info(
+ "Connector last run: "
+ + datetime.utcfromtimestamp(last_run).strftime(
+ "%Y-%m-%d %H:%M:%S"
+ )
+ )
+ else:
+ last_run = None
+ self.helper.log_info("Connector has never run")
+ # If the last_run is more than interval-1 day
+ if last_run is None or (
+ (timestamp - last_run)
+ > ((int(self.template_interval) - 1) * 60 * 60 * 24)
+ ):
+ timestamp = int(time.time())
+ now = datetime.utcfromtimestamp(timestamp)
+                    friendly_name = "Connector run @ " + now.strftime("%Y-%m-%d %H:%M:%S")
+                    work_id = self.helper.api.work.initiate_work(
+                        self.helper.connect_id, friendly_name
+                    )
+
+ ###
+ # RUN CODE HERE
+ ###
+
+ # Store the current timestamp as a last run
+ self.helper.log_info(
+ "Connector successfully run, storing last_run as "
+ + str(timestamp)
+ )
+ self.helper.set_state({"last_run": timestamp})
+ message = (
+ "Last_run stored, next run in: "
+ + str(round(self.get_interval() / 60 / 60 / 24, 2))
+ + " days"
+ )
+ self.helper.api.work.to_processed(work_id, message)
+ self.helper.log_info(message)
+ time.sleep(60)
+ else:
+ new_interval = self.get_interval() - (timestamp - last_run)
+ self.helper.log_info(
+ "Connector will not run, next run in: "
+ + str(round(new_interval / 60 / 60 / 24, 2))
+ + " days"
+ )
+ time.sleep(60)
+
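The last-run condition in the loop above can be reduced to a small pure function; a sketch (the interval - 1 day offset mirrors the comparison used in the template code):

```python
from typing import Optional


def should_run(last_run: Optional[int], now: int, interval_days: int) -> bool:
    """Pure form of the last-run check above: run on first start, or when
    more than (interval_days - 1) days have elapsed since the stored last_run."""
    if last_run is None:
        # The connector has never run
        return True
    return (now - last_run) > (interval_days - 1) * 60 * 60 * 24
```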
For development purposes, it is easier to simply run the Python script locally until everything works as it should.
+$ virtualenv env
+$ source ./env/bin/activate
+$ pip3 install -r requirements.txt
+$ cp config.yml.sample config.yml
+# Define the opencti url and token, as well as the connector's id
+$ vim config.yml
+$ python3 main.py
+INFO:root:Listing Threat-Actors with filters null.
+INFO:root:Connector registered with ID: a2de809c-fbb9-491d-90c0-96c7d1766000
+INFO:root:Starting ping alive thread
+...
+
Before submitting a Pull Request, please test your code for different use cases and scenarios. We don't have an automatic testing suite for the connectors yet, thus we highly depend on developers thinking about creative scenarios their code could encounter.
If you plan to provide your connector for use by the community (❤️), your code should pass the following (minimum) criteria.
+# Linting with flake8 contains no errors or warnings
+$ flake8 --ignore=E,W
+# Verify formatting with black
+$ black .
+All done! ✨ 🍰 ✨
+1 file left unchanged.
+# Verify import sorting
+$ isort --profile black .
+Fixing /path/to/connector/file.py
+# Push your feature/fix to GitHub
+$ git add [file(s)]
+$ git commit -m "[connector_name] descriptive message"
+$ git push origin [branch-name]
+# Open a pull request with the title "[connector_name] message"
+
If you have any trouble with this, just reach out to the OpenCTI core team. We are happy to assist.
The development stack requires some base software that needs to be installed.
+Platform dependencies in development are deployed through container management, so you need to install a container stack.
+We currently support Docker and Podman.
As OpenCTI has a dependency on ElasticSearch, you have to set vm.max_map_count before running the containers, as mentioned in the ElasticSearch documentation.
The platform is developed with NodeJS, so you need to install node and the yarn package manager.
+$ sudo apt-get install nodejs
+$ sudo curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
+$ sudo echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
+$ sudo apt-get update && sudo apt-get install yarn
+
For the worker and connectors, a Python runtime is needed.
The development stack requires some base software that needs to be installed.
+Platform dependencies in development are deployed through container management, so you need to install a container stack.
+We currently support Docker and Podman.
+Docker Desktop from - https://docs.docker.com/desktop/install/windows-install/
+wsl --set-default-version 2
The platform is developed with NodeJS, so you need to install node and the yarn package manager.
Open a CMD prompt as Administrator and install/run:
+pip3 install pywin32
Configure Yarn (https://yarnpkg.com/getting-started/install)
+corepack enable
For the worker and connectors, a Python runtime is needed. Even if you already have a Python runtime installed through the node installation, +on Windows some nodejs packages will be recompiled with the Python and C++ runtime.
+For this reason, Visual Studio Build Tools is required.
+Just use defaults on each screen
+Install your preferred IDE
This summary should give you a detailed description for setting up the OpenCTI environment +necessary for developing on the OpenCTI platform, a client library or the connectors. +This page documents how to set up an "All-in-One" development environment for OpenCTI. +The devenv will contain the code of 3 different repositories:
Contains the OpenCTI platform project code base:
+~/opencti/opencti-platform/opencti-dev
~/opencti/opencti-platform/opencti-graphql
~/opencti/opencti-platform/opencti-frontend
~/opencti/opencti-worker
Contains a lot of developed connectors, as a source of inspiration for your new connector.
+Contains the source code of the python library used in worker or connectors.
+Some tools are needed before starting to develop. Please check Ubuntu prerequisites or Windows prerequisites
+Fork and clone the git repositories
In development, dependencies are deployed through containers.
+A development compose file is available in ~/opencti/opencti-platform/opencti-dev
You now have all the dependencies of OpenCTI running and waiting for the platform to start.
The GraphQL API is developed in JS with some Python code. +As it's an "all-in-one" installation, the Python environment will be installed in a virtual environment.
+cd ~/opencti/opencti-platform/opencti-graphql
+python3 -m venv .venv --prompt "graphql"
+source .venv/bin/activate
+pip install --upgrade pip wheel setuptools
+yarn install
+yarn install:python
+deactivate
+
The API can be specifically configured with files depending on the starting profile. +By default, the default.json file is used and is correctly configured for local usage, except for the admin password.
So you need to create a development profile file. You can duplicate the default file and adapt it for your needs. +
At a minimum, adapt the admin section for the password and token. +
"admin": {
+ "email": "admin@opencti.io",
+    "password": "MyNewPassword",
+ "token": "UUID generated with https://www.uuidgenerator.net"
+ }
+
Before starting the backend you need to install the nodejs modules
+ +Then you can simply start the backend API with the yarn start command
+ +The platform will start logging some interesting information
+{"category":"APP","level":"info","message":"[OPENCTI] Starting platform","timestamp":"2023-07-02T16:37:10.984Z","version":"5.8.7"}
+{"category":"APP","level":"info","message":"[OPENCTI] Checking dependencies statuses","timestamp":"2023-07-02T16:37:10.987Z","version":"5.8.7"}
+{"category":"APP","level":"info","message":"[SEARCH] Elasticsearch (8.5.2) client selected / runtime sorting enabled","timestamp":"2023-07-02T16:37:11.014Z","version":"5.8.7"}
+{"category":"APP","level":"info","message":"[CHECK] Search engine is alive","timestamp":"2023-07-02T16:37:11.015Z","version":"5.8.7"}
+...
+{"category":"APP","level":"info","message":"[INIT] Platform initialization done","timestamp":"2023-07-02T16:37:11.622Z","version":"5.8.7"}
+{"category":"APP","level":"info","message":"[OPENCTI] API ready on port 4000","timestamp":"2023-07-02T16:37:12.382Z","version":"5.8.7"}
+
If you want to start with another profile, you can use the -e parameter. +For example, to use the profile.json configuration file:
Before pushing your code, you need to validate the syntax and ensure the tests pass.
+yarn lint
yarn check-ts
To start the tests, you will need to create a test.json file. +You can use the same dependencies by simply adapting the prefixes of all dependencies.
+yarn test:dev
Before starting the backend you need to install the nodejs modules
+ +Then you can simply start the frontend with the yarn start command
+ +The frontend will start with some interesting information
+[INFO] [default] compiling...
+[INFO] [default] compiled documents: 1592 reader, 1072 normalization, 1596 operation text
+[INFO] Compilation completed.
+[INFO] Done.
+[HPM] Proxy created: /stream -> http://localhost:4000
+[HPM] Proxy created: /storage -> http://localhost:4000
+[HPM] Proxy created: /taxii2 -> http://localhost:4000
+[HPM] Proxy created: /feeds -> http://localhost:4000
+[HPM] Proxy created: /graphql -> http://localhost:4000
+[HPM] Proxy created: /auth/** -> http://localhost:4000
+[HPM] Proxy created: /static/flags/** -> http://localhost:4000
+
The web UI should be accessible on http://127.0.0.1:3000
Before pushing your code, you need to validate the syntax and ensure the tests pass.
+yarn lint
yarn check-ts
yarn test
Running a worker is required when you want to develop on the ingestion or import/export connectors.
+cd ~/opencti/opencti-worker/src
+python3 -m venv .venv --prompt "worker"
+source .venv/bin/activate
+pip3 install --upgrade pip wheel setuptools
+pip3 install -r requirements.txt
+deactivate
+
For connector development, please take a look at the dedicated Connectors development page.
Based on the development source, you can build the package for production. +This package will be minified and optimized with esbuild.
After the build, you can start the production build with yarn serv. +This build will use the production.json configuration file.
+ + + + + + + + + + + + + + + + + + + +Under construction
+We are doing our best to complete this page. +If you want to participate, don't hesitate to join the Filigran Community on Slack +or submit your pull request on the Github doc repository.
+Welcome to the OpenCTI Documentation space. Here you will be able to find all documents, meeting notes and presentations about the platform.
+Release notes
+Please, be sure to also take a look at the OpenCTI releases notes, they may contain important information about releases and deployments.
+OpenCTI is an open source platform allowing organizations to manage their cyber threat intelligence knowledge and observables. It has been created in order to structure, store, organize and visualize technical and non-technical information about cyber threats.
+Deployment & Setup
+Learn how to deploy and configure the platform as well as +launch connectors to get the first data in OpenCTI.
+ +User Guide
+Understand how to use the platform, explore the knowledge, import +and export information, create dashboard, etc.
+ +Administration
+Know how to administrate OpenCTI, create users and groups using RBAC / +segregation, put retention policies and custom taxonomies.
+ +Need more help?
+We are doing our best to keep this documentation complete, accurate and up to date.
+If you still have questions or you find something which is not sufficiently explained, join the Filigran Community on Slack.
All tutorials are published directly on the Medium blog; this section provides a comprehensive list of the most important ones.
+Introducing malware analysis: enhance your cybersecurity triage with OpenCTI
+ Jul 22, 2023
As a cybersecurity analyst, you understand the importance of quickly identifying and analyzing suspicious or malicious files, URLs, and network traffic...
+ +OpenCTI case management is ready for takeoff: what is available and what’s next
+ Jul 3, 2023
As part of our 2023 strategic roadmap, we’ve worked since January on the case management system within the OpenCTI platform. This initiative comes from 2 simple statements...
+ +Progressive rollout of the OpenCTI Enterprise Edition: why, what and how?
+ June 10, 2023
We are thrilled to announce that, from OpenCTI 5.8, Filigran is now providing some customers with an Enterprise Edition of the platform, whether on-premise...
+ +Below, you will find external resources which may be useful along your OpenCTI journey.
+ OpenCTI Ecosystem
+List of available connectors and integrations to expand platform usage.
Training Courses
+Training courses for analysts and administrators in the Filigran training center.
Performances tests & metrics
+Regular performance tests based on default configuration and datasets.
Under construction
+We are doing our best to complete this page. +If you want to participate, don't hesitate to join the Filigran Community on Slack +or submit your pull request on the Github doc repository.
+Under construction
+We are doing our best to complete this page. +If you want to participate, don't hesitate to join the Filigran Community on Slack +or submit your pull request on the Github doc repository.
+Under construction
+We are doing our best to complete this page. +If you want to participate, don't hesitate to join the Filigran Community on Slack +or submit your pull request on the Github doc repository.
+Under construction
+We are doing our best to complete this page. +If you want to participate, don't hesitate to join the Filigran Community on Slack +or submit your pull request on the Github doc repository.
+Under construction
+We are doing our best to complete this page. +If you want to participate, don't hesitate to join the Filigran Community on Slack +or submit your pull request on the Github doc repository.
+Under construction
+We are doing our best to complete this page. +If you want to participate, don't hesitate to join the Filigran Community on Slack +or submit your pull request on the Github doc repository.
In order to provide a real-time way to consume STIX CTI information, OpenCTI provides data events in a stream that can be consumed to react on creation, update, deletion and merge. +This way of getting information out of OpenCTI is highly efficient and already used by some connectors.
OpenCTI currently uses Redis Streams (see https://redis.io/topics/streams-intro) as the technical layer. +Each time something is modified in the OpenCTI database, a specific event is added to the stream.
In order to provide a really easy consumption protocol, we decided to expose an SSE (https://fr.wikipedia.org/wiki/Server-sent_events) HTTP URL linked to the standard login system of OpenCTI. +Any user with the correct access rights can open http://opencti_instance/stream as an SSE connection and start receiving live events. You can of course consume the stream directly in Redis, but you will then have to manage access and rights yourself.
+id: {Event stream id} -> Like 1620249512318-0
+event: {Event type} -> create / update / delete
+data: { -> The complete event data
+ version -> The version number of the event
+ type -> The inner type of the event
+ scope -> The scope of the event [internal or external]
+ data: {STIX data} -> The STIX representation of the data.
+    message -> A simple string to easily understand the event
+ origin: {Data Origin} -> Complex object with different information about the origin of the event
+ context: {Event context} -> Complex object with meta information depending of the event type
+}
+
The id can be used to consume the stream from this specific point.
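A minimal sketch of parsing one message in this format (stdlib only; a real consumer would typically use an SSE client library and pass its OpenCTI authentication in the request headers):

```python
import json


def parse_sse_event(raw: str) -> dict:
    """Parse one SSE message of the form above into {'id', 'event', 'data'}.

    Sketch only: real SSE allows multi-line data fields, which this ignores.
    """
    parsed = {}
    for line in raw.splitlines():
        if ":" not in line:
            continue
        field, _, value = line.partition(":")
        value = value.lstrip()
        if field == "data":
            # The data field carries the JSON event payload
            parsed["data"] = json.loads(value)
        else:
            parsed[field] = value
    return parsed


msg = 'id: 1620249512318-0\nevent: create\ndata: {"version": 4, "type": "create"}'
evt = parse_sse_event(msg)
```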
The current STIX data representation is based on the STIX 2.1 format using the extension mechanism. +Please take a look at https://docs.oasis-open.org/cti/stix/v2.1/stix-v2.1.html for more information.
It's simply the data created, in STIX format.
It's simply the data in STIX format just before its deletion. +You will also find the automated deletions in context, due to automatic dependency management.
This event type publishes the complete STIX data along with patch information. +Thanks to the patches, it's possible to rebuild the previous version and easily understand what happened in the update. +patch and reverse_patch follow the official jsonpatch specification. You can find more information at https://jsonpatch.com/
+{
+ "context": {
+ "patch": [/* patch operation object */],
+ "reverse_patch": [/* patch operation object */]
+ }
+}
+
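Because patch and reverse_patch follow the jsonpatch specification (RFC 6902), the previous version of an entity can be rebuilt by applying reverse_patch to the current data. A minimal sketch supporting only top-level operations (a real consumer would use a full jsonpatch library):

```python
def apply_patch(doc: dict, operations: list) -> dict:
    """Apply a simplified JSON patch: top-level 'replace'/'add'/'remove' only,
    enough to illustrate rebuilding a previous version (sketch)."""
    result = dict(doc)
    for op in operations:
        key = op["path"].lstrip("/")
        if op["op"] in ("replace", "add"):
            result[key] = op["value"]
        elif op["op"] == "remove":
            result.pop(key, None)
    return result


# Rebuild the previous version from the current data and reverse_patch
current = {"name": "Evil Pattern v2"}
reverse_patch = [{"op": "replace", "path": "/name", "value": "Evil Pattern"}]
previous = apply_patch(current, reverse_patch)
```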
A merge is a mix of an update of the merge target and deletions of the sources. +In this event you will find the same patch and reverse_patch as in an update, plus the list of elements merged into the target in the "sources" attribute.
+{
+ "context": {
+ "patch": [/* patch operation object */],
+ "reverse_patch": [/* patch operation object */],
+ "sources": [{STIX data}]
+ }
+}
+
In OpenCTI we propose 2 types of streams.
The stream hosted at the /stream URL contains all the raw events of the platform, always filtered by the user's rights (marking based). +It's a technical stream, a bit complex to use but very useful for internal processing or some specific connectors like backup/restore. +This stream is live by default, but if you want to catch up you can simply add the from parameter to your query. +This parameter accepts a timestamp in milliseconds as well as an event id, +like http://localhost/stream?from=1620249512599
+Stream size?
The raw stream is really important in the platform and needs to be sized according to the retention period you want to ensure. +The more retention you have, the more safely you can reprocess past information. +We usually recommend 1 month of retention, which usually corresponds to around 2,000,000 events. +This limit can be configured with the redis:trimming option; please check the deployment configuration page.
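As a rough sizing sketch (the 2,000,000-events-per-month figure comes from the recommendation above; your actual event rate will differ):

```python
def trimming_limit(events_per_day: int, retention_days: int) -> int:
    """Estimate the redis:trimming stream length needed to keep a given
    retention window (sketch: assumes a constant event rate)."""
    return events_per_day * retention_days


# ~2,000,000 events/month is roughly 66,667 events/day
limit = trimming_limit(66_667, 30)
```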
This stream aims to simplify your usage of the stream through the connectors, providing a way to create streams with specific filters through the UI. +After creating such a stream, it is simply accessible from /stream/{STREAM_ID}.
It's very useful for various data externalization and synchronization use cases, like Splunk, Tanium...
+This stream provides different interesting mechanics:
+If you want to dig in about the internal behavior you can check this complete diagram:
from and recover are 2 different options that need to be explained.
from (query parameter) is always the parameter that describes the initial date/event_id you want to start from. +It can also be set up with the request header from or last-event-id
recover (query parameter) is an option that lets you consume the initial events from the database and not from the stream. +It can also be set up with the request header recover or recover-date
This difference will be transparent for the consumer but is very important to get old information as an initial snapshot. +It also lets you consume information that is no longer in the stream retention period.
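A sketch of building the stream URL with these query parameters (parameter names come from the description above; urlencode handles the escaping):

```python
from urllib.parse import urlencode


def build_stream_url(base: str, stream_id: str = "", from_: str = "", recover: str = "") -> str:
    """Build a live-stream URL with the optional from/recover query
    parameters described above (sketch; from_ has a trailing underscore
    because 'from' is a Python keyword)."""
    path = f"{base}/stream/{stream_id}" if stream_id else f"{base}/stream"
    params = {}
    if from_:
        params["from"] = from_
    if recover:
        params["recover"] = recover
    return f"{path}?{urlencode(params)}" if params else path


url = build_stream_url("http://localhost", from_="1620249512599", recover="2023-01-01T00:00:00Z")
```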
The next diagram will help you understand the concept:
+ + + + + + + + + + + + + + + + + + + +Under construction
+We are doing our best to complete this page. +If you want to participate, don't hesitate to join the Filigran Community on Slack +or submit your pull request on the Github doc repository.
+Under construction
+We are doing our best to complete this page. +If you want to participate, don't hesitate to join the Filigran Community on Slack +or submit your pull request on the Github doc repository.
"},{"location":"administration/csv-mappers/","title":"CSV Mappers","text":"In OpenCTI, CSV Mappers allow parsing CSV files into STIX 2.1 objects. The mappers are created and configured by users with the Manage CSV mappers capability and then made available to users who import CSV files, for instance inside a report or in the global import view, and want to extract information from these files.
"},{"location":"administration/csv-mappers/#principles","title":"Principles","text":"The mapper contains representations of STIX 2.1 entities and relationships, in order for the parser to properly extract them. One mapper is dedicated to parsing a specific CSV file structure, and thus dedicated mappers should be created for each and every specific CSV structure you might need to ingest in the platform.
"},{"location":"administration/csv-mappers/#create-a-new-csv-mapper","title":"Create a new CSV Mapper","text":"In menu Data, select the submenu Processing, and on the right menu select CSV Mappers. You are presented with a list of all the mappers set in the platform. Note that you can delete or update any mapper from the context menu via the burger button beside each mapper.
Click on the button + in the bottom-right corner to add a new Mapper.
Enter a name for your mapper and some basic information about your CSV files:
Info
Note that the parser will not extract any information from the CSV header, if any; it will just skip the first line during parsing.
Then, you need to create every representation, one per entity and relationship type represented in the CSV file. Click on the + button to add an empty representation in the list, and click on the chevron to expand the section and configure the representation.
Depending on the entity type, the form contains the fields that are either required (input outlined in red) or optional. For each field, set the corresponding columns mapping (the letter-based index of the column in the CSV table, as presented in common spreadsheet tools).
References to other entities should be picked from the list of all the other representations already defined earlier in the mapper.
You can do the same for all the relationships between entities that might be defined in this particular CSV file structure.
Fields might have options besides the mandatory column index, to help extract relevant data (for instance, a separator such as + or | for multi-value columns).
The only parameter required to save a CSV Mapper is a name; creating and refining its representations can be done iteratively.
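For illustration, consider a hypothetical three-column CSV file (the values below are invented, not real threat data):

```csv
ExampleGroup,Hypothetical intrusion set used for illustration,France
OtherGroup,Another invented intrusion set,Germany
```

An Intrusion Set representation could map its name field to column A and its description field to column B, while a Country representation maps its name field to column C; a targets relationship representation would then reference those two representations.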
All CSV Mappers go through a quick validation that checks if all the representations have all their mandatory fields set. Only valid mappers can be run by the users on their CSV files.
Mapper validity is visible in the list of CSV Mappers as shown below.
"},{"location":"administration/csv-mappers/#test-your-csv-mapper","title":"Test your CSV mapper","text":"In the creation or edition form, hit the button Test to open a dialog. Select a sample CSV file and hit the Test button.
The code block contains the raw result of the parsing attempt, in form of a STIX 2.1 bundle in JSON format.
You can then check if the extracted values match the expected entities and relationships.
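For illustration, a successful test might return a bundle shaped as follows (the identifiers and values here are invented placeholders, not the output of an actual run):

```json
{
  "type": "bundle",
  "id": "bundle--00000000-0000-0000-0000-000000000000",
  "objects": [
    {
      "type": "intrusion-set",
      "spec_version": "2.1",
      "id": "intrusion-set--00000000-0000-0000-0000-000000000001",
      "name": "ExampleGroup"
    }
  ]
}
```

Each CSV row matching a representation should appear as one object in the objects array.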
"},{"location":"administration/csv-mappers/#use-a-mapper-for-importing-a-csv-file","title":"Use a mapper for importing a CSV file","text":"You can change the default configuration of the built-in CSV import connector in your configuration file.
\"import_csv_built_in_connector\": {\n\"enabled\": true, \"interval\": 10000, \"validate_before_import\": false\n},\n
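In containerized deployments, the same options can typically be passed as environment variables using OpenCTI's nested-key convention (double underscores); the exact names below are assumptions and should be verified against your version's configuration reference:

```
# Hypothetical environment-variable equivalents of the JSON configuration above
IMPORT_CSV_BUILT_IN_CONNECTOR__ENABLED=true
IMPORT_CSV_BUILT_IN_CONNECTOR__INTERVAL=10000
IMPORT_CSV_BUILT_IN_CONNECTOR__VALIDATE_BEFORE_IMPORT=false
```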
In Data import section, or Data tab of an entity, when you upload a CSV, you can select a mapper to apply to the file. The file will then be parsed following the representation rules set in the mapper.
By default, the imported elements will be added in a new Analyst Workbench where you will be able to check the result of the import.
"},{"location":"administration/enterprise/","title":"Enterprise edition","text":"Filigran
Filigran provides an Enterprise Edition of the platform, whether on-premise or in SaaS.
"},{"location":"administration/enterprise/#what-is-opencti-ee","title":"What is OpenCTI EE?","text":"OpenCTI Enterprise Edition is based on the open core concept. This means that the source code of OCTI EE remains open source and included in the main GitHub repository of the platform but is published under a specific license. As specified in the GitHub license file:
The OpenCTI Community Edition is licensed under the Apache License, Version 2.0 (the \u201cApache License\u201d). The OpenCTI Enterprise Edition is licensed under the OpenCTI Non-Commercial License (the \u201cNon-Commercial License\u201d). The source files in this repository have a header indicating which license they are under. If no such header is provided, this means that the file is belonging to the Community Edition under the Apache License, Version 2.0.
We wrote a complete article explaining the Enterprise Edition; feel free to read it for more information.
"},{"location":"administration/enterprise/#ee-activation","title":"EE Activation","text":"Enterprise edition is easy to activate. You need to go to the platform settings and click on the Activate button.
Then you will need to agree to the Filigran EULA.
As a reminder:
Audit logs help you answer \"who did what, where, and when?\" within your data with the maximum level of transparency. Please read Activity monitoring page to get all information.
"},{"location":"administration/enterprise/#playbooks-and-automation","title":"Playbooks and automation","text":"OpenCTI playbooks are flexible automation scenarios which can be fully customized and enabled by platform administrators to enrich, filter and modify the data created or updated in the platform. Please read Playbook automation page to get all information.
"},{"location":"administration/enterprise/#organizations-management-and-segregation","title":"Organizations management and segregation","text":"Organizations segregation is a way to segregate your data based on the organization associated with users. It is useful when your platform aims to share data with multiple organizations that share access to the same OpenCTI platform. See Organizations RBAC
"},{"location":"administration/enterprise/#more-to-come","title":"More to come","text":"More features will be available in OpenCTI in the future, such as:
The following chapter aims at giving the reader an understanding of possible options by entity type. Entities can be customized in \u00ab Settings \u00bb \u2192 \u00ab Customization \u00bb.
"},{"location":"administration/entities/#hidden-in-interface","title":"Hidden in interface","text":"This configuration hides a specific entity type across the entire platform. It is a powerful way to simplify the interface and focus on your domain expertise. For example, if you are not interested in disinformation campaigns, you can hide related entities like Narratives and Channels from the menus.
You can define which entities to hide platform-wide from \u00ab Settings \u00bb \u2192 \u00ab Customization \u00bb; \u00ab Settings \u00bb \u2192 \u00ab Parameters \u00bb also gives you a list of the hidden entities.
You can also define hidden entities for specific user Groups and Organizations, from \u00ab Settings \u00bb \u2192 \u00ab Security \u00bb \u2192 \u00ab Groups/Organizations \u00bb by editing a Group/Organization.
An overview is available in Parameters > Hidden entity types.
"},{"location":"administration/entities/#automatic-references-at-file-upload","title":"Automatic references at file upload","text":"This configuration enables an entity to automatically construct an external reference from the uploaded file.
"},{"location":"administration/entities/#enforce-references","title":"Enforce references","text":"This configuration enables the requirement of a reference message on an entity creation or modification. This option is helpful if you want to keep strong consistency and traceability of your knowledge and is well suited for manual creation and updates.
"},{"location":"administration/entities/#workflow","title":"Workflow","text":"For now, OpenCTI has a simple workflow approach.
The available status for an entity is first defined by a collection of status templates (that can be defined from \u00ab Settings \u00bb \u2192 \u00ab Taxonomies \u00bb \u2192 \u00ab Status Template \u00bb).
Then, a workflow can be defined by ordering a sequence of status templates.
"},{"location":"administration/entities/#attributes","title":"Attributes","text":"In an Entity, each attribute offers some customization options:
Confidence scale can be customized for each entity type by selecting another scale template or by editing directly the scale values. Once you have customized your scale, click on \"Update\" to save your configuration.
"},{"location":"administration/introduction/","title":"Introduction","text":"This guide aims to give you a full overview of the OpenCTI features and workflows. The platform can be used in various contexts to handle threats management use cases from a technical to a more strategic level.
"},{"location":"administration/introduction/#administrative-settings","title":"Administrative Settings","text":"The OpenCTI Administrative settings console allows administrators to configure many options dynamically within the system. As an Administrator, you can access this settings console, by clicking the settings link.
The Settings Console allows for configuration of various aspects of the system.
"},{"location":"administration/introduction/#general-configuration","title":"General Configuration","text":"Various aspects of the Dark Theme can be dynamically configured in this section.
"},{"location":"administration/introduction/#light-theme-color-scheme","title":"Light Theme Color Scheme","text":"Various aspects of the Light Theme can be dynamically configured in this section.
"},{"location":"administration/introduction/#tools-configuration-display","title":"Tools Configuration Display","text":"This section will give general status on the various tools and enabled components of the currently configured OpenCTI deployment.
"},{"location":"administration/merging/","title":"Merging","text":""},{"location":"administration/merging/#data-merging","title":"Data merging","text":"Within the OpenCTI platform, the merge capability is available in the \"Data > Entities\" tab and is fairly straightforward to use. To execute a merge, select the set of entities to be merged, then click on the Merge icon. NB: it is not possible to merge entities of different types, nor is it possible to merge more than 4 entities at a time (it will have to be done in several stages).
Central to the merging process is the selection of a main entity. This primary entity becomes the anchor, retaining crucial attributes such as name and description. Other entities, while losing specific fields like descriptions, are aliased under the primary entity. This strategic decision preserves vital data while eliminating redundancy.
Once the choice has been made, simply validate to run the task in the background. Depending on the number of entity relationships, and the current workload on the platform, the merge may take more or less time. In the case of a healthy platform and around a hundred relationships per entity, merge is almost instantaneous.
"},{"location":"administration/merging/#data-preservation-and-relationship-continuity","title":"Data preservation and relationship continuity","text":"A common concern when merging entities lies in the potential loss of information. In the context of OpenCTI, this worry is alleviated. Even if the merged entities were initially created by distinct sources, the platform ensures that data is not lost. Upon merging, the platform automatically generates relationships directly on the merged entity. This strategic approach ensures that all connections, regardless of their origin, are anchored to the consolidated entity. Post-merge, OpenCTI treats these once-separate entities as a singular, unified entity. Subsequent information from varied sources is channeled directly into the entity resulting from the merger. This unified entity becomes the focal point for all future relationships, ensuring the continuity of data and relationships without any loss or fragmentation.
"},{"location":"administration/merging/#important-considerations","title":"Important considerations","text":"Under construction
We are doing our best to complete this page. If you want to participate, don't hesitate to join the Filigran Community on Slack or submit your pull request on the Github doc repository.
"},{"location":"administration/parameters/","title":"Parameters","text":""},{"location":"administration/parameters/#description","title":"Description","text":"This part of the interface will let you configure global platform settings, like title, favicon, etc.
It will also give you important information about the platform.
"},{"location":"administration/parameters/#configuration","title":"Configuration","text":"Configure global platform settings, like title, favicon, etc.
"},{"location":"administration/parameters/#opencti-platform","title":"OpenCTI Platform","text":"Important information about the platform.
It is also the place to activate the Enterprise Edition.
"},{"location":"administration/parameters/#platform-announcement","title":"Platform Announcement","text":"This section gives you the possibility to set and display Announcements in the platform. Those announcements will be visible to every user in the platform, on top of the interface.
They can be used to inform your whole user community of important information, such as a scheduled downtime, an upcoming upgrade, or an important tip regarding the usage of the platform.
An Announcement can be accompanied by a \"Dismiss\" button. When clicked by a user, it makes the message disappear for that user.
This option can be deactivated to have a permanent Announcement.
\u26a0\ufe0f Only one Announcement is displayed at a time. Dismissible Announcements are displayed first, then the latest non-dismissible Announcement.
"},{"location":"administration/parameters/#analytics","title":"Analytics","text":"Enterprise edition
Analytics is available under the \"Filigran entreprise edition\" license.
Please read the dedicated page for more information.
Configure analytics providers (at the moment, only Google Analytics 4).
"},{"location":"administration/policies/","title":"Policies","text":""},{"location":"administration/policies/#platform-main-organization","title":"Platform main organization","text":"Allows setting a main organization for the entire platform.
All the pieces of knowledge must be shared with the organization of the user wishing to access them, or this user needs to be inside the main organization.
"},{"location":"administration/policies/#authentication-strategies","title":"Authentication Strategies","text":"There are several authentication strategies to connect to the platform.
Please see the Authentication section for further details.
"},{"location":"administration/policies/#local-password-policies","title":"Local Password Policies","text":"Allows defining the password policy according to several criteria in order to strengthen the security of your platform, namely: minimum/maximum number of characters, number of digits, etc.
"},{"location":"administration/policies/#login-messages","title":"Login Messages","text":"Allows defining login, consent and consent confirmation messages to customize and highlight your platform's security policy.
"},{"location":"administration/policies/#platform-banner-configuration","title":"Platform Banner Configuration","text":"Allows OpenCTI deployments to have a custom banner message (top and bottom) with a colored background for the message (green, red, or yellow). It can be used to add a disclaimer or system purpose statement that will be displayed at the top and bottom of the OpenCTI instance's pages.
This configuration has two parameters:
The rules engine comprises a set of predefined rules (named inference rules) that govern how new relationships are inferred based on existing data. These rules are carefully crafted to ensure logical and accurate relationship creation. Here is the list of existing inference rules:
"},{"location":"administration/reasoning/#raise-incident-based-on-sighting","title":"Raise incident based on sighting","text":"Conditions Creations A non-revoked Indicator is sighted in an Entity Creation of an Incident linked to the sighted Indicator and the targeted Entity"},{"location":"administration/reasoning/#sightings-of-observables-via-observed-data","title":"Sightings of observables via observed data","text":"Conditions Creations An Indicator is based on an Observable contained in an Observed Data Creation of a sighting between the Indicator and the creating Identity of the Observed Data"},{"location":"administration/reasoning/#sightings-propagation-from-indicator","title":"Sightings propagation from indicator","text":"Conditions Creations An Indicator based on an Observable is sighted in an Entity The Observable is sighted in the Entity"},{"location":"administration/reasoning/#sightings-propagation-from-observable","title":"Sightings propagation from observable","text":"Conditions Creations An Indicator is based on an Observable sighted in an Entity The Indicator is sighted in the Entity"},{"location":"administration/reasoning/#relation-propagation-via-an-observable","title":"Relation propagation via an observable","text":"Conditions Creations An observable is related to two Entities Create a related to relationship between the two Entities"},{"location":"administration/reasoning/#attribution-propagation","title":"Attribution propagation","text":"Conditions Creations An Entity A is attributed to an Entity B and this Entity B is itself attributed to an Entity C The Entity A is attributed to Entity C"},{"location":"administration/reasoning/#belonging-propagation","title":"Belonging propagation","text":"Conditions Creations An Entity A is part of an Entity B and this Entity B is itself part of an Entity C The Entity A is part of Entity C"},{"location":"administration/reasoning/#location-propagation","title":"Location propagation","text":"Conditions Creations 
A Location A is located at a Location B and this Location B is itself located at a Location C The Location A is located at Location C"},{"location":"administration/reasoning/#organization-propagation-via-participation","title":"Organization propagation via participation","text":"Conditions Creations A User is affiliated with an Organization B, which is part of an Organization C The User is affiliated to the Organization C"},{"location":"administration/reasoning/#identities-propagation-in-reports","title":"Identities propagation in reports","text":"Conditions Creations A Report contains an Identity B and this Identity B is part of an Identity C The Report contains Identity C, as well as the Relationship between Identity B and Identity C"},{"location":"administration/reasoning/#locations-propagation-in-reports","title":"Locations propagation in reports","text":"Conditions Creations A Report contains a Location B and this Location B is located at a Location C The Report contains Location B, as well as the Relationship between Location B and Location C"},{"location":"administration/reasoning/#observables-propagation-in-reports","title":"Observables propagation in reports","text":"Conditions Creations A Report contains an Indicator and this Indicator is based on an Observable The Report contains the Observable, as well as the Relationship between the Indicator and the Observable"},{"location":"administration/reasoning/#usage-propagation-via-attribution","title":"Usage propagation via attribution","text":"Conditions Creations An Entity A, attributed to an Entity C, uses an Entity B The Entity C uses the Entity B"},{"location":"administration/reasoning/#inference-of-targeting-via-a-sighting","title":"Inference of targeting via a sighting","text":"Conditions Creations An Indicator, sighted at an Entity C, indicates an Entity B The Entity B targets the Entity C"},{"location":"administration/reasoning/#targeting-propagation-via-attribution","title":"Targeting propagation via 
attribution","text":"Conditions Creations An Entity A, attributed to an Entity C, targets an Entity B The Entity C targets the Entity B"},{"location":"administration/reasoning/#targeting-propagation-via-belonging","title":"Targeting propagation via belonging","text":"Conditions Creations An Entity A targets an Identity B, part of an Identity C The Entity A targets the Identity C"},{"location":"administration/reasoning/#targeting-propagation-via-location","title":"Targeting propagation via location","text":"Conditions Creations An Entity targets a Location B and this Location B is located at a Location C The Entity targets the Location C"},{"location":"administration/reasoning/#targeting-propagation-when-located","title":"Targeting propagation when located","text":"Conditions Creations An Entity A targets an Entity B and this target is located at Location D. The Entity A targets the Location D"},{"location":"administration/reasoning/#rule-execution","title":"Rule execution","text":""},{"location":"administration/reasoning/#rule-activation","title":"Rule activation","text":"When a rule is activated, a background task is initiated. This task scans all platform data, identifying existing relationships that meet the conditions defined by the rule. Subsequently, it creates new objects (entities and/or relationships), expanding the network of insights within your threat intelligence environment. Then, activated rules operate continuously. Whenever a relationship is created or modified, and this change aligns with the conditions specified in an active rule, the reasoning mechanism is triggered. This ensures real-time relationship inference.
"},{"location":"administration/reasoning/#rule-deactivation","title":"Rule deactivation","text":"Deactivating a rule leads to the deletion of all objects and relationships created by it. This cleanup process maintains the accuracy and reliability of your threat intelligence database.
"},{"location":"administration/reasoning/#access-restrictions-and-data-impact","title":"Access restrictions and data impact","text":"Access to the rule engine panel is restricted to administrators only. Regular users do not have visibility into this section of the platform. Administrators possess the authority to activate or deactivate rules.
The rules engine empowers OpenCTI with the capability to automatically establish intricate relationships within your data. However, these rules can lead to a very large number of objects created. Even if the operation is reversible, an administrator should consider the impact of activating a rule.
"},{"location":"administration/reasoning/#additional-resources","title":"Additional resources","text":"Under construction
We are doing our best to complete this page. If you want to participate, don't hesitate to join the Filigran Community on Slack or submit your pull request on the Github doc repository.
"},{"location":"administration/segregation/","title":"Data segregation","text":""},{"location":"administration/segregation/#introduction","title":"Introduction","text":"Data segregation in the context of Cyber Threat Intelligence refers to the practice of categorizing and separating different types of data or information related to cybersecurity threats based on specific criteria.
This separation helps organizations manage and analyze threat intelligence more effectively and securely and the goal of data segregation is to ensure that only those individuals who are authorized to view a particular set of data have access to that set of data.
Practically, \"Need-to-know basis\" and \"classification level\" are data segregation measures.
"},{"location":"administration/segregation/#marking-definitions","title":"Marking Definitions","text":""},{"location":"administration/segregation/#description","title":"Description","text":"Marking definitions are essential in the context of data segregation to ensure that data is appropriately categorized and protected based on its sensitivity or classification level. Marking definitions establish a standardized framework for classifying data.
Marking Definition objects are unique among STIX objects in the STIX 2.1 standard in that they cannot be versioned. This restriction is in place to prevent the possibility of indirect alterations to the markings associated with a STIX Object.
Multiple markings can be added to the same object. Certain categories of marking definitions or trust groups may enforce rules that specify which markings take precedence over others or how some markings can be added to complement existing ones.
In OpenCTI, data is segregated based on knowledge marking. The diagram provided below illustrates the manner in which OpenCTI establishes connections between pieces of information to authorize data access for a user:
"},{"location":"administration/segregation/#traffic-light-protocol","title":"Traffic Light Protocol","text":"The Traffic Light Protocol is implemented by default as marking definitions in OpenCTI. It allows you to segregate information by TLP level in your platform and restrict access to marked data if users are not authorized to see the corresponding marking.
The Traffic Light Protocol (TLP) was designed by the Forum of Incidence Response and Security Teams (FIRST) to provide a standardized method for classifying and handling sensitive information, based on four categories of sensitivity.
For more details, the diagram provided below illustrates how are categorized the marking definitions:
"},{"location":"administration/segregation/#create-new-markings","title":"Create new markings","text":"In order to create a marking, you must first have the ability to access the Settings tab. For example, a user who is in a group with the role of Administrator can bypass all capabilities or a user who is in a group with the role that has Access administration
checked can access the Settings tab. For more details about user administration here: Users and Role Based Access Control
Once you have access to the settings, you can create your new marking in Security
-> Marking Definitions
A marking has:
In order for all users in a group to be able to see entities and relationships that have specific markings on them, allowed markings can be checked when updating a group:
"},{"location":"administration/segregation/#default-marking-definitions","title":"Default marking definitions","text":"To apply a default marking when creating a new entity or relationship, you can choose which marking to add by default from the list of allowed markings. You can add only one marking per type, but you can have multiple types.
Be careful: adding markings as default markings is not enough to see them when you create an entity or relationship; you also need to enable default markings in the entity or relationship customization.
For example, if you create a new report, go to Settings
-> Customization
-> Report
-> Markings
and click on Activate/Desactivate default values
To authorize a group to automatically have access to a newly created marking definition in allowed marking definitions, you can check Automatically authorize this group to new marking definition
when update a group:
When a new entity or a new relationship is created, if multiple markings of the same type and different order are added, the platform will only keep the highest order for each type.
For example:
Create a new report and add markings PAP:AMBER
,PAP:RED
,TLP:AMBER+STRICT
,TLP:CLEAR
and a statement CC-BY-SA-4.0 DISARM Foundation
The final markings kept are: PAP:RED
, TLP:AMBER+STRICT
and CC-BY-SA-4.0 DISARM Foundation
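The retention logic described above (one marking kept per definition type, by highest order) can be sketched in Python. The numeric orders below are illustrative only, not OpenCTI's actual values:

```python
def keep_highest_markings(markings):
    """Keep only the highest-order marking per definition type.

    `markings` is a list of (definition_type, name, order) tuples;
    the order values used here are illustrative, not OpenCTI's.
    """
    best = {}
    for definition_type, name, order in markings:
        current = best.get(definition_type)
        if current is None or order > current[2]:
            best[definition_type] = (definition_type, name, order)
    return sorted(best.values())


# The example from the documentation above (hypothetical orders):
requested = [
    ("PAP", "PAP:AMBER", 3),
    ("PAP", "PAP:RED", 5),
    ("TLP", "TLP:AMBER+STRICT", 4),
    ("TLP", "TLP:CLEAR", 1),
    ("statement", "CC-BY-SA-4.0 DISARM Foundation", 0),
]
kept = {name for _, name, _ in keep_highest_markings(requested)}
# kept == {"PAP:RED", "TLP:AMBER+STRICT", "CC-BY-SA-4.0 DISARM Foundation"}
```

The same grouping-by-type rule also explains the merge behaviour described in the next paragraph.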
When updating an entity or a relationship:
When you merge multiple entities, the platform will keep the highest order for each type of markings when the merge is complete:
For example, merging 2 observables, one with TLP:CLEAR
and PAP:CLEAR
and the other one with PAP:RED
and TLP:GREEN
from 198.250.250.11 to 197.250.251.12.
As a final result, you will have the observable with the value 197.250.251.12 with PAP:RED
and TLP:GREEN
When you import data from a connector, the connector cannot downgrade a marking for the same entity, if a same type of marking is set on it.
For example, if you create a new observable with same values as Alien Vault data and change marking in the platform as TLP:AMBER
, when importing data, the platform will keep the highest rank for the same type of markings.
Under construction
We are doing our best to complete this page. If you want to participate, don't hesitate to join the Filigran Community on Slack or submit your pull request on the Github doc repository.
"},{"location":"administration/users/","title":"Users and Role Based Access Control","text":""},{"location":"administration/users/#introduction","title":"Introduction","text":"In OpenCTI, the RBAC system is not only related to what users can or cannot do in the platform (aka Capabilities
) but also to the system of data segregation. In addition, platform behaviour such as default home dashboards, default triggers and digests, as well as default hidden menus or entities, can be defined across groups and organizations.
Roles are used in the platform to grant given groups certain capabilities defining what users in those groups can or cannot do.
"},{"location":"administration/users/#list-of-capabilities","title":"List of capabilities","text":"Capability DescriptionBypass all capabilities
Just bypass everything including data segregation and enforcements. Access knowledge
Access in read-only to all the knowledge in the platform. Access to collaborative creation
Create notes and opinions (and modify its own) on entities and relations. Create / Update knowledge
Create and update existing entities and relationships. Restrict organization access
Share entities and relationships with other organizations. Delete knowledge
Delete entities and relationships. Upload knowledge files
Upload files in the Data
and Content
section of entities. Download knowledge export
Download the exports generated in the entities (in the Data
section). Ask for knowledge enrichment
Trigger an enrichment for a given entity. Access exploration
Access to workspaces whether custom dashboards or investigations. Create / Update exploration
Create and update existing workspaces whether custom dashboards or investigations. Delete exploration
Delete workspaces whether custom dashboards or investigations. Access connectors
Read information in the Data > Connectors
section. Manage connector state
Reset the connector state to restart ingestion from the beginning. Access Taxii feed
Access and consume TAXII collections. Manage Taxii collections
Create, update and delete TAXII collections. Manage CSV mappers
Create, update and delete CSV mappers. Access administration
Access and manage overall parameters of the platform in Settings > Parameters
. Manage credentials
Access and manage roles, groups, users, organizations and security policies. Manage marking definitions
Update and delete marking definitions. Manage labels & Attributes
Update and delete labels, custom taxonomies, workflow and case templates. Connectors API usage: register, ping, export push ...
Connectors specific permissions for register, ping, push export files, etc. Connect and consume the platform streams (/stream, /stream/live)
List and consume the OpenCTI live streams. Bypass mandatory references if any
If external references enforced in a type of entity, be able to bypass the enforcement."},{"location":"administration/users/#manage-roles","title":"Manage roles","text":"You can manage the roles in Settings > Security > Roles
.
To create a role, just click on the +
button:
Then you will be able to define the capabilities of the role:
"},{"location":"administration/users/#users","title":"Users","text":"You can manage the users in Settings > Security > Users
. If you are using Single-Sign-On (SSO), the users in OpenCTI are automatically created upon login.
To create a user, just click on the +
button:
When accessing a user, it is possible to:
Groups are the main vehicle for managing permissions and data segregation, as well as platform customization, for the users belonging to them. You can manage the groups in Settings > Security > Groups
.
Here is a description of the available group parameters.
Auto new markings: If a new marking definition is created, this group will automatically be granted access to it.
Default membership: If a new user is created (manually or upon SSO), they will be added to this group.
Roles: Roles and capabilities granted to the users belonging to this group.
Default dashboard: Customize the home dashboard for the users belonging to this group.
Default markings: In Settings > Customization > Entity types, if default marking definitions are enabled, the default markings of the group are used.
Allowed markings: Grant the group access to the defined marking definitions; more details in data segregation.
Triggers and digests: Define default triggers and digests for the users belonging to this group. "},{"location":"administration/users/#manage-a-group","title":"Manage a group","text":"When managing a group, you can define the members and all of the above configurations.
"},{"location":"administration/users/#organizations","title":"Organizations","text":"Users can belong to organizations, which provide an additional layer of data segregation and customization.
"},{"location":"administration/users/#organization-administration","title":"Organization administration","text":"Platform administrators can promote members of an organization as \"Organization administrator\". This elevated role grants them the necessary capabilities to create, edit and delete users from the corresponding organization. Additionally, administrators have the flexibility to define a list of groups that can be granted to newly created members by the organization administrators. This feature simplifies the process of granting appropriate access and privileges to individuals joining the organization.
The platform administrator can promote/demote an organization admin through the user's edit form.
The \"Organization admin\" has restricted access to Settings. They can only manage the members of the organizations for which they have been promoted as \"admins\".
"},{"location":"administration/audit/configuration/","title":"Configuration","text":"Enterprise edition
The unified activity interface and logging are available under the \"Filigran enterprise edition\" license.
Please read the dedicated page for more information.
As explained in the overview page, all administration actions are listened to by default. However, knowledge is not listened to by default, due to the performance impact on the platform.
For this reason, you need to explicitly activate extended listening for a user, group or organization.
Listening starts right after the configuration; past events will not be taken into account.
"},{"location":"administration/audit/events/","title":"Events","text":"Enterprise edition
The unified activity interface and logging are available under the \"Filigran enterprise edition\" license.
Please read the dedicated page for more information.
"},{"location":"administration/audit/events/#description","title":"Description","text":"The OpenCTI activity capability is the way to unify what really happens in the platform. In the events section, you will have access to a UI that answers \"who did what, where, and when?\" within your data with the maximum level of transparency.
"},{"location":"administration/audit/events/#include-knowledge","title":"Include knowledge","text":"By default, the events screen only shows you the administration actions performed by the users.
If you also want to see information about the knowledge, simply activate the filter in the bar to get a complete overview of all user actions.
Don't hesitate to read the overview page again to better understand the difference between Audit and Basic/Extended knowledge.
"},{"location":"administration/audit/overview/","title":"Overview","text":""},{"location":"administration/audit/overview/#overview","title":"Overview","text":"Enterprise edition
The unified activity interface and logging are available under the \"Filigran enterprise edition\" license.
Please read the dedicated page for more information.
The OpenCTI activity capability is the way to unify what really happens in the platform. With this feature, you will be able to answer \"who did what, where, and when?\" within your data with the maximum level of transparency. Enabling activity helps your security, auditing and compliance entities monitor the platform for possible vulnerabilities or external data misuse.
"},{"location":"administration/audit/overview/#categories","title":"Categories","text":"The activity capability groups 3 different concepts that need to be explained.
"},{"location":"administration/audit/overview/#basic-knowledge","title":"Basic knowledge","text":"The basic knowledge refers to all STIX data knowledge inside OpenCTI. Every create/update/delete action on that knowledge is accessible through the history. That basic activity is handled by the history manager and can also be found directly on each entity.
"},{"location":"administration/audit/overview/#extended-knowledge","title":"Extended knowledge","text":"The extended knowledge refers to extra information data used to track specific user activity. As this kind of tracking is expensive, it is only done for the specific users/groups/organizations explicitly configured.
"},{"location":"administration/audit/overview/#audit-knowledge","title":"Audit knowledge","text":"Audit focuses on user administration and security actions. Audit produces console/log files along with user interface elements.
{\n\"auth\": \"<User information>\",\n\"category\": \"AUDIT\",\n\"level\": \"<info | error>\",\n\"message\": \"<human readable explanation>\",\n\"resource\": {\n\"type\": \"<authentication | mutation>\",\n\"event_scope\": \"<depends on type>\",\n\"event_access\": \"<administration>\",\n\"data\": \"<contextual data linked to the event type>\",\n\"version\": \"<version of audit log format>\"\n},\n\"timestamp\": \"<event date>\",\n\"version\": \"<platform version>\"\n}\n
"},{"location":"administration/audit/overview/#architecture","title":"Architecture","text":"OpenCTI uses different mechanisms to publish actions (audit) or data modifications (history).
"},{"location":"administration/audit/overview/#audit-knowledge_1","title":"Audit knowledge","text":"Administration or security actions
With Enterprise edition activated, administration and security actions are always written; you can't configure, exclude, or disable them.
Supported
Not supported for now
Not applicable
"},{"location":"administration/audit/overview/#ingestion","title":"Ingestion","text":"Create Delete Edit Remote OCTI Streams"},{"location":"administration/audit/overview/#data-sharing","title":"Data sharing","text":"Create Delete Edit CSV Feeds TAXII Feeds Stream Feeds"},{"location":"administration/audit/overview/#connectors","title":"Connectors","text":"Create Delete Edit Connectors State reset Works"},{"location":"administration/audit/overview/#parameters","title":"Parameters","text":"Create Delete Edit Platform parameters"},{"location":"administration/audit/overview/#security","title":"Security","text":"Create Delete Edit Roles Groups Users Sessions Policies"},{"location":"administration/audit/overview/#customization","title":"Customization","text":"Create Delete Edit Entity types Rules engine Retention policies"},{"location":"administration/audit/overview/#taxonomies","title":"Taxonomies","text":"Create Delete Edit Status templates Case templates + tasks"},{"location":"administration/audit/overview/#accesses","title":"Accesses","text":"Listen Login (success or fail) Logout Unauthorized access"},{"location":"administration/audit/overview/#extended-knowledge_1","title":"Extended knowledge","text":"Extended knowledge
Extended knowledge activity is written only if you activate the feature for a subset of users, groups or organizations.
"},{"location":"administration/audit/overview/#data-management","title":"Data management","text":"Some history actions are already included in the \"basic knowledge\" (basic marker).
Read Create Delete Edit Platform knowledge basic basic basic Background tasks Knowledge Knowledge files basic basic Global data import files Analyst workbenches files Triggers Workspaces Investigations User profile"},{"location":"administration/audit/overview/#user-actions","title":"User actions","text":"Supported Ask for file import Ask for data enrichment Ask for export generation Execute global search"},{"location":"administration/audit/triggers/","title":"Activity triggers","text":"Enterprise edition
Activity unified interface and logging are available under the \"Filigran entreprise edition\" license.
Please read the dedicated page to have all information
"},{"location":"administration/audit/triggers/#description","title":"Description","text":"Having all the history in the user interface (events) its sometimes not enough to have a proactive monitoring. For this reason you can configure some specific triggers to receive notifications on audit events. You can configure like personal triggers, lives one that will be sent directly or digest depending on your needs.
"},{"location":"administration/audit/triggers/#configuration","title":"Configuration","text":"In this kind of trigger you will have to configure different options: - Notification target: User interface or email - Recipients: who will receive the notification - Filters: a set of filters to get only events that really interested you. (who is responsible for this event, kind of events, ...)
"},{"location":"administration/audit/triggers/#event-structure","title":"Event structure","text":"In order to correctly configure the filters, here's a definition of the event structure
authentication
Event scopes: login
and logout
Event type: read
read
and unauthorized
Event type: file
read
, create
and delete
Event type: mutation
unauthorized
, update
, create
and delete
Event type: command
search
, enrich
, import
and export
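To illustrate, here is a minimal, hypothetical sketch (not OpenCTI code) of how a filter over this event structure could be evaluated, assuming events follow the audit log format shown in the overview (a `resource` object carrying `type` and `event_scope`):

```python
# Illustrative only: evaluate a simple type/scope filter against an
# audit event shaped like the documented audit log format.
def matches(event, types=None, scopes=None):
    """Return True if the event's resource type and scope pass the filter."""
    resource = event.get("resource", {})
    if types and resource.get("type") not in types:
        return False
    if scopes and resource.get("event_scope") not in scopes:
        return False
    return True

login_event = {
    "category": "AUDIT",
    "resource": {"type": "authentication", "event_scope": "login"},
}
print(matches(login_event, types=["authentication"], scopes=["login", "logout"]))
```

A trigger configured with the filter `types=["authentication"]` would, in this sketch, match login/logout events but ignore mutations.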
OpenCTI supports several authentication providers. If you configure multiple strategies, they will be tested in the order you declared them.
Activation
You only need to configure/activate the strategies that you really want to propose to your users in terms of authentication.
The product proposes two kinds of authentication strategies:
Under the hood, we technically use the strategies provided by PassportJS. We integrate a subset of the strategies available with Passport; if you need more, we can theoretically integrate any Passport strategy.
"},{"location":"deployment/authentication/#local-users-form","title":"Local users (form)","text":"This strategy uses the OpenCTI database for user management.
OpenCTI uses this strategy as the default, but it is not the one we recommend for security reasons.
\"local\": {\n\"strategy\": \"LocalStrategy\",\n\"config\": {\n\"disabled\": false\n}\n}\n
Production deployment
Please use the LDAP/Auth0/OpenID/SAML strategy for production deployment.
"},{"location":"deployment/authentication/#ldap-form","title":"LDAP (form)","text":"This strategy can be used to authenticate your user with your company LDAP and is based on Passport - LDAPAuth.
\"ldap\": {\n\"strategy\": \"LdapStrategy\",\n\"config\": {\n\"url\": \"ldaps://mydc.domain.com:636\",\n\"bind_dn\": \"cn=Administrator,cn=Users,dc=mydomain,dc=com\",\n\"bind_credentials\": \"MY_STRONG_PASSWORD\",\n\"search_base\": \"cn=Users,dc=mydomain,dc=com\",\n\"search_filter\": \"(cn={{username}})\",\n\"mail_attribute\": \"mail\",\n\"account_attribute\": \"givenName\",\n// \"firstname_attribute\": \"cn\",\n// \"lastname_attribute\": \"cn\",\n\"allow_self_signed\": true\n}\n}\n
If you would like to use LDAP groups to automatically associate LDAP groups and OpenCTI groups/organizations:
\"ldap\": {\n\"config\": {\n...\n\"group_search_base\": \"cn=Groups,dc=mydomain,dc=com\",\n\"group_search_filter\": \"(member={{dn}})\",\n\"groups_management\": { // To map LDAP Groups to OpenCTI Groups\n\"group_attribute\": \"cn\",\n\"groups_mapping\": [\"LDAP_Group_1:OpenCTI_Group_1\", \"LDAP_Group_2:OpenCTI_Group_2\", ...]\n},\n\"organizations_management\": { // To map LDAP Groups to OpenCTI Organizations\n\"organizations_path\": \"cn\",\n\"organizations_mapping\": [\"LDAP_Group_1:OpenCTI_Organization_1\", \"LDAP_Group_2:OpenCTI_Organization_2\", ...]\n}\n}\n}\n
"},{"location":"deployment/authentication/#saml-button","title":"SAML (button)","text":"This strategy can be used to authenticate your user with your company SAML and is based on Passport - SAML.
\"saml\": {\n\"identifier\": \"saml\",\n\"strategy\": \"SamlStrategy\",\n\"config\": {\n\"issuer\": \"mytestsaml\",\n// \"account_attribute\": \"nameID\",\n// \"firstname_attribute\": \"nameID\",\n// \"lastname_attribute\": \"nameID\",\n\"entry_point\": \"https://auth.mydomain.com/auth/realms/mydomain/protocol/saml\",\n\"saml_callback_url\": \"http://localhost:4000/auth/saml/callback\",\n// \"private_key\": \"MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwg...\",\n\"cert\": \"MIICmzCCAYMCBgF2Qt3X1zANBgkqhkiG9w0BAQsFADARMQ8w...\",\n\"logout_remote\": false\n}\n}\n
For the SAML strategy to work:
cert
parameter is mandatory (PEM format) because it is used to validate the SAML response.private_key
(PEM format) is optional and is only required if you want to sign the SAML client request.Certificates
Be careful to put the cert
/ private_key
key in PEM format. Indeed, a lot of systems generally export the keys in X509 / PKCS12 formats, so you will need to convert them. Here is an example to extract PEM from PKCS12:
openssl pkcs12 -in keystore.p12 -out newfile.pem -nodes\n
Here is an example of SAML configuration using environment variables:
- PROVIDERS__SAML__STRATEGY=SamlStrategy - \"PROVIDERS__SAML__CONFIG__LABEL=Login with SAML\"\n- PROVIDERS__SAML__CONFIG__ISSUER=mydomain\n- PROVIDERS__SAML__CONFIG__ENTRY_POINT=https://auth.mydomain.com/auth/realms/mydomain/protocol/saml\n- PROVIDERS__SAML__CONFIG__SAML_CALLBACK_URL=http://opencti.mydomain.com/auth/saml/callback\n- PROVIDERS__SAML__CONFIG__CERT=MIICmzCCAYMCBgF3Rt3X1zANBgkqhkiG9w0BAQsFADARMQ8w\n- PROVIDERS__SAML__CONFIG__LOGOUT_REMOTE=false\n
OpenCTI supports mapping SAML Roles/Groups onto OpenCTI Groups. Here is an example:
\"saml\": {\n\"config\": {\n...,\n// Groups mapping\n\"groups_management\": { // To map SAML Groups to OpenCTI Groups\n\"group_attributes\": [\"Group\"],\n\"groups_mapping\": [\"SAML_Group_1:OpenCTI_Group_1\", \"SAML_Group_2:OpenCTI_Group_2\", ...]\n},\n\"groups_management\": { // To map SAML Roles to OpenCTI Groups\n\"group_attributes\": [\"Role\"],\n\"groups_mapping\": [\"SAML_Role_1:OpenCTI_Group_1\", \"SAML_Role_2:OpenCTI_Group_2\", ...]\n},\n// Organizations mapping\n\"organizations_management\": { // To map SAML Groups to OpenCTI Organizations\n\"organizations_path\": [\"Group\"],\n\"organizations_mapping\": [\"SAML_Group_1:OpenCTI_Organization_1\", \"SAML_Group_2:OpenCTI_Organization_2\", ...]\n},\n\"organizations_management\": { // To map SAML Roles to OpenCTI Organizations\n\"organizations_path\": [\"Role\"],\n\"organizations_mapping\": [\"SAML_Role_1:OpenCTI_Organization_1\", \"SAML_Role_2:OpenCTI_Organization_2\", ...]\n}\n}\n}\n
Here is an example of SAML Groups mapping configuration using environment variables:
- \"PROVIDERS__SAML__CONFIG__GROUPS_MANAGEMENT__GROUP_ATTRIBUTES=[\\\"Group\\\"]\"\n- \"PROVIDERS__SAML__CONFIG__GROUPS_MANAGEMENT__GROUPS_MAPPING=[\\\"SAML_Group_1:OpenCTI_Group_1\\\", \\\"SAML_Group_2:OpenCTI_Group_2\\\", ...]\"\n
"},{"location":"deployment/authentication/#auth0-button","title":"Auth0 (button)","text":"This strategy allows you to use the Auth0 service to handle authentication and is based on Passport - Auth0.
\"authzero\": {\n\"identifier\": \"auth0\",\n\"strategy\": \"Auth0Strategy\",\n\"config\": {\n\"clientID\": \"XXXXXXXXXXXXXXXXXX\",\n\"baseURL\": \"https://opencti.mydomain.com\",\n\"clientSecret\": \"XXXXXXXXXXXXXXXXXX\",\n\"callback_url\": \"https://opencti.mydomain.com/auth/auth0/callback\",\n\"domain\": \"mycompany.eu.auth0.com\",\n\"audience\": \"XXXXXXXXXXXXXXX\",\n\"scope\": \"openid email profile XXXXXXXXXXXXXXX\",\n\"logout_remote\": false\n}\n}\n
Here is an example of Auth0 configuration using environment variables:
- PROVIDERS__AUTHZERO__STRATEGY=Auth0Strategy\n- PROVIDERS__AUTHZERO__CONFIG__CLIENT_ID=${AUTH0_CLIENT_ID}\n- PROVIDERS__AUTHZERO__CONFIG__BASEURL=${AUTH0_BASE_URL}\n- PROVIDERS__AUTHZERO__CONFIG__CLIENT_SECRET=${AUTH0_CLIENT_SECRET}\n- PROVIDERS__AUTHZERO__CONFIG__CALLBACK_URL=${AUTH0_CALLBACK_URL}\n- PROVIDERS__AUTHZERO__CONFIG__DOMAIN=${AUTH0_DOMAIN}\n- PROVIDERS__AUTHZERO__CONFIG__SCOPE=\"openid email profile\"\n- PROVIDERS__AUTHZERO__CONFIG__LOGOUT_REMOTE=false\n
"},{"location":"deployment/authentication/#openid-connect-button","title":"OpenID Connect (button)","text":"This strategy allows you to use the OpenID Connect protocol to handle authentication and is based on Node OpenID Client, which is more powerful than the Passport one.
\"oic\": {\n\"identifier\": \"oic\",\n\"strategy\": \"OpenIDConnectStrategy\",\n\"config\": {\n\"label\": \"Login with OpenID\",\n\"issuer\": \"https://auth.mydomain.com/auth/realms/mydomain\",\n\"client_id\": \"XXXXXXXXXXXXXXXXXX\",\n\"client_secret\": \"XXXXXXXXXXXXXXXXXX\",\n\"redirect_uris\": [\"https://opencti.mydomain.com/auth/oic/callback\"],\n\"logout_remote\": false\n}\n}\n
Here is an example of OpenID configuration using environment variables:
- PROVIDERS__OPENID__STRATEGY=OpenIDConnectStrategy - \"PROVIDERS__OPENID__CONFIG__LABEL=Login with OpenID\"\n- PROVIDERS__OPENID__CONFIG__ISSUER=https://auth.mydomain.com/auth/realms/xxxx\n- PROVIDERS__OPENID__CONFIG__CLIENT_ID=XXXXXXXXXXXXXXXXXX\n- PROVIDERS__OPENID__CONFIG__CLIENT_SECRET=XXXXXXXXXXXXXXXXXX\n- \"PROVIDERS__OPENID__CONFIG__REDIRECT_URIS=[\\\"https://opencti.mydomain.com/auth/oic/callback\\\"]\"\n- PROVIDERS__OPENID__CONFIG__LOGOUT_REMOTE=false\n
OpenCTI supports mapping OpenID Roles/Groups onto OpenCTI Groups (everything is tied to a group in the platform). Here is an example:
\"oic\": {\n\"config\": {\n...,\n// Groups mapping\n\"groups_management\": { // To map OpenID Groups to OpenCTI Groups\n\"groups_scope\": \"groups\",\n\"groups_path\": [\"groups\", \"realm_access.groups\", \"resource_access.account.groups\"],\n\"groups_mapping\": [\"OpenID_Group_1:OpenCTI_Group_1\", \"OpenID_Group_2:OpenCTI_Group_2\", ...]\n},\n\"groups_management\": { // To map OpenID Roles to OpenCTI Groups\n\"groups_scope\": \"roles\",\n\"groups_path\": [\"roles\", \"realm_access.roles\", \"resource_access.account.roles\"],\n\"groups_mapping\": [\"OpenID_Role_1:OpenCTI_Group_1\", \"OpenID_Role_2:OpenCTI_Group_2\", ...]\n},\n// Organizations mapping \n\"organizations_management\": { // To map OpenID Groups to OpenCTI Organizations\n\"organizations_scope\": \"groups\",\n\"organizations_path\": [\"groups\", \"realm_access.groups\", \"resource_access.account.groups\"],\n\"organizations_mapping\": [\"OpenID_Group_1:OpenCTI_Group_1\", \"OpenID_Group_2:OpenCTI_Group_2\", ...]\n},\n\"organizations_management\": { // To map OpenID Roles to OpenCTI Organizations\n\"organizations_scope\": \"roles\",\n\"organizations_path\": [\"roles\", \"realm_access.roles\", \"resource_access.account.roles\"],\n\"organizations_mapping\": [\"OpenID_Role_1:OpenCTI_Group_1\", \"OpenID_Role_2:OpenCTI_Group_2\", ...]\n},\n}\n}\n
Here is an example of OpenID Groups mapping configuration using environment variables:
- \"PROVIDERS__OPENID__CONFIG__GROUPS_MANAGEMENT__GROUPS_SCOPE=groups\"\n- \"PROVIDERS__OPENID__CONFIG__GROUPS_MANAGEMENT__GROUPS_PATH=[\\\"groups\\\", \\\"realm_access.groups\\\", \\\"resource_access.account.groups\\\"]\"\n- \"PROVIDERS__OPENID__CONFIG__GROUPS_MANAGEMENT__GROUPS_MAPPING=[\\\"OpenID_Group_1:OpenCTI_Group_1\\\", \\\"OpenID_Group_2:OpenCTI_Group_2\\\", ...]\"\n
"},{"location":"deployment/authentication/#facebook-button","title":"Facebook (button)","text":"This strategy can authenticate your users with Facebook and is based on Passport - Facebook
\"facebook\": {\n\"identifier\": \"facebook\",\n\"strategy\": \"FacebookStrategy\",\n\"config\": {\n\"client_id\": \"XXXXXXXXXXXXXXXXXX\",\n\"client_secret\": \"XXXXXXXXXXXXXXXXXX\",\n\"callback_url\": \"https://opencti.mydomain.com/auth/facebook/callback\",\n\"logout_remote\": false\n}\n}\n
"},{"location":"deployment/authentication/#google-button","title":"Google (button)","text":"This strategy can authenticate your users with Google and is based on Passport - Google
\"google\": {\n\"identifier\": \"google\",\n\"strategy\": \"GoogleStrategy\",\n\"config\": {\n\"client_id\": \"XXXXXXXXXXXXXXXXXX\",\n\"client_secret\": \"XXXXXXXXXXXXXXXXXX\",\n\"callback_url\": \"https://opencti.mydomain.com/auth/google/callback\",\n\"logout_remote\": false\n}\n}\n
"},{"location":"deployment/authentication/#github-button","title":"GitHub (button)","text":"This strategy can authenticate your users with GitHub and is based on Passport - GitHub
\"github\": {\n\"identifier\": \"github\",\n\"strategy\": \"GithubStrategy\",\n\"config\": {\n\"client_id\": \"XXXXXXXXXXXXXXXXXX\",\n\"client_secret\": \"XXXXXXXXXXXXXXXXXX\",\n\"callback_url\": \"https://opencti.mydomain.com/auth/github/callback\",\n\"logout_remote\": false\n}\n}\n
"},{"location":"deployment/authentication/#client-certificate-button","title":"Client certificate (button)","text":"This strategy can authenticate a user based on SSL client certificates. For this you need to configure your OCTI to start in HTTPS, for example:
\"port\": 443,\n\"https_cert\": {\n\"key\": \"/cert/server_key.pem\",\n\"crt\": \"/cert/server_cert.pem\",\n\"reject_unauthorized\": true\n}\n
And then add the ClientCertStrategy
:
\"cert\": {\n\"strategy\":\"ClientCertStrategy\",\n\"config\": {\n\"label\":\"CLIENT CERT\"\n}\n}\n
Then, when accessing OCTI for the first time, the browser will ask for the certificate you want to use.
"},{"location":"deployment/authentication/#automatically-create-group-on-sso","title":"Automatically create group on SSO","text":"The variable auto_create_group can be added in the options of some strategies (LDAP, SAML and OpenID). If this variable is true, the groups of a user that logs in will automatically be created if they don't exist.
More precisely, if the user that tries to authenticate has groups that don't exist in OpenCTI but exist in the SSO configuration, there are two cases:
We assume that Group1 exists in the platform and newGroup doesn't exist. The user that tries to log in has the group newGroup. If auto_create_group = true in the SSO configuration, the group named newGroup will be created at platform initialization and the user will be mapped onto it. If auto_create_group = false or is undefined, the user can't log in and an error is raised.
\"groups_management\": {\n\"group_attribute\": \"cn\",\n\"groups_mapping\": [\"SSO_GROUP_NAME1:group1\", \"SSO_GROUP_NAME_2:newGroup\", ...]\n},\n\"auto_create_group\": true\n
"},{"location":"deployment/authentication/#examples","title":"Examples","text":""},{"location":"deployment/authentication/#ldap-then-fallback-to-local","title":"LDAP then fallback to local","text":"In this example, the users have a login form and need to enter a login and password. The authentication is done on LDAP first, then locally if the user failed to authenticate, and finally fails if none of them succeeded. Here is an example for the production.json
file:
\"providers\": {\n\"ldap\": {\n\"strategy\": \"LdapStrategy\",\n\"config\": {\n\"url\": \"ldaps://mydc.mydomain.com:636\",\n\"bind_dn\": \"cn=Administrator,cn=Users,dc=mydomain,dc=com\",\n\"bind_credentials\": \"MY_STRONG_PASSWORD\",\n\"search_base\": \"cn=Users,dc=mydomain,dc=com\",\n\"search_filter\": \"(cn={{username}})\",\n\"mail_attribute\": \"mail\",\n\"account_attribute\": \"givenName\"\n}\n},\n\"local\": {\n\"strategy\": \"LocalStrategy\",\n\"config\": {\n\"disabled\": false\n}\n}\n}\n
If you use a container deployment, here is an example using environment variables:
- PROVIDERS__LDAP__STRATEGY=LdapStrategy\n- PROVIDERS__LDAP__CONFIG__URL=ldaps://mydc.mydomain.org:636\n- PROVIDERS__LDAP__CONFIG__BIND_DN=cn=Administrator,cn=Users,dc=mydomain,dc=com\n- PROVIDERS__LDAP__CONFIG__BIND_CREDENTIALS=XXXXXXXXXX\n- PROVIDERS__LDAP__CONFIG__SEARCH_BASE=cn=Users,dc=mydomain,dc=com\n- PROVIDERS__LDAP__CONFIG__SEARCH_FILTER=(cn={{username}})\n- PROVIDERS__LDAP__CONFIG__MAIL_ATTRIBUTE=mail\n- PROVIDERS__LDAP__CONFIG__ACCOUNT_ATTRIBUTE=givenName\n- PROVIDERS__LDAP__CONFIG__ALLOW_SELF_SIGNED=true\n- PROVIDERS__LOCAL__STRATEGY=LocalStrategy\n
"},{"location":"deployment/clustering/","title":"Clustering","text":""},{"location":"deployment/clustering/#introduction","title":"Introduction","text":"The OpenCTI platform technological stack has been designed to be able to scale horizontally. All dependencies such as Elastic or Redis can be deployed in cluster mode and performances can be drastically increased by deploying multiple platform and worker instances.
"},{"location":"deployment/clustering/#high-level-architecture","title":"High level architecture","text":"Here is the high level architecture for customers and Filigran cloud platform to ensure both high availability and throughput.
"},{"location":"deployment/clustering/#configuration","title":"Configuration","text":""},{"location":"deployment/clustering/#dependencies","title":"Dependencies","text":""},{"location":"deployment/clustering/#elasticsearch","title":"ElasticSearch","text":"In the ElasticSearch configuration of OpenCTI, it is possible to declare all nodes.
- \"ELASTICSEARCH__URL=[\\\"https://user:pass@node1:9200\\\", \\\"https://user:pass@node2:9200\\\", ...]\"\n
Compatibility
OpenCTI is also compatible with OpenSearch and AWS / GCP / Azure native search services based on the ElasticSearch query language.
"},{"location":"deployment/clustering/#redis","title":"Redis","text":"Redis should be turned to cluster mode:
- REDIS__MODE=cluster\n- \"REDIS__HOSTNAMES=[\\\"node1:6379\\\", \\\"node2:6379\\\", ...]\"\n
Compatibility
OpenCTI is also compatible with ElastiCache, MemoryStore and AWS / GCP / Azure native services based on the Redis protocol.
"},{"location":"deployment/clustering/#rabbitmq","title":"RabbitMQ","text":"For the RabbitMQ cluster, you will need a TCP load balancer on top of the nodes since the configuration does not support multi-nodes for now:
- RABBITMQ__HOSTNAME=load-balancer-rabbitmq\n
Compatibility
OpenCTI is also compatible with Amazon MQ, CloudAMQP and AWS / GCP / Azure native services based on the AMQP protocol.
"},{"location":"deployment/clustering/#s3-bucket-minio","title":"S3 bucket / MinIO","text":"MinIO is an open source server able to serve S3 buckets. It can be deployed in cluster mode and is compatible with several storage backends. OpenCTI is compatible with any tool following the S3 standard.
"},{"location":"deployment/clustering/#platform","title":"Platform","text":"As shown on the schema, the best practices for cluster mode, to avoid congestion in the technological stack, are:
When enabling clustering, the number of nodes is displayed in Settings > Parameters.
"},{"location":"deployment/clustering/#managers-and-schedulers","title":"Managers and schedulers","text":"Also, since some managers like the rule engine, the task manager and the notification manager can take some resources in the OpenCTI NodeJS process, it is highly recommended to disable them in the frontend cluster. OpenCTI automatically handles the distribution and launching of the engines across all nodes in the cluster, except where they are explicitly disabled in the configuration.
"},{"location":"deployment/configuration/","title":"Configuration","text":"The purpose of this section is to learn how to configure OpenCTI to have it tailored for your production and development needs.
Here are the configuration keys, for both containers (environment variables) and manual deployment.
Parameters equivalence
The equivalent of a config variable in environment variables uses a double underscore (__
) for each level of config.
For example:
\"providers\": {\n\"ldap\": {\n\"strategy\": \"LdapStrategy\"\n}\n}\n
will become:
PROVIDERS__LDAP__STRATEGY=LdapStrategy\n
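As an illustration only (this helper is hypothetical and not part of OpenCTI), the naming convention can be sketched in a few lines of Python: each nesting level is joined with a double underscore and keys are upper-cased:

```python
def config_to_env(config, prefix=""):
    """Flatten a nested config dict into OpenCTI-style environment
    variable names: levels joined with '__', keys upper-cased."""
    env = {}
    for key, value in config.items():
        name = f"{prefix}__{key.upper()}" if prefix else key.upper()
        if isinstance(value, dict):
            env.update(config_to_env(value, name))  # recurse into sub-config
        else:
            env[name] = value
    return env

print(config_to_env({"providers": {"ldap": {"strategy": "LdapStrategy"}}}))
# {'PROVIDERS__LDAP__STRATEGY': 'LdapStrategy'}
```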
If you need to put a list of elements in a key, it must use a special formatting. Here is an example for the redirect URIs of the OpenID config:
\"PROVIDERS__OPENID__CONFIG__REDIRECT_URIS=[\\\"https://demo.opencti.io/auth/oic/callback\\\"]\"\n
"},{"location":"deployment/configuration/#platform","title":"Platform","text":""},{"location":"deployment/configuration/#api-frontend","title":"API & Frontend","text":""},{"location":"deployment/configuration/#basic-parameters","title":"Basic parameters","text":"Parameter Environment variable Default value Description app:port APP__PORT 4000 Listen port of the application app:base_path APP__BASE_PATH Specific URI (ie. /opencti) app:base_url APP__BASE_URL http://localhost:4000 Full URL of the platform (should include the base_path
if any) app:request_timeout APP__REQUEST_TIMEOUT 1200000 Request timeout, in ms (default 20 minutes) app:session_timeout APP__SESSION_TIMEOUT 0 Session timeout, in ms (default 0 minute - disabled) app:session_idle_timeout APP__SESSION_IDLE_TIMEOUT 1200000 Idle timeout, in ms (default 20 minutes) app:session_cookie APP__SESSION_COOKIE false Use memory/session cookie instead of persistent one app:admin:email APP__ADMIN__EMAIL admin@opencti.io Default login email of the admin user app:admin:password APP__ADMIN__PASSWORD ChangeMe Default password of the admin user app:admin:token APP__ADMIN__TOKEN ChangeMe Default token (must be a valid UUIDv4)"},{"location":"deployment/configuration/#network-and-security","title":"Network and security","text":"Parameter Environment variable Default value Description http_proxy HTTP_PROXY Proxy URL for HTTP connection (example: http://proxy:8080) https_proxy HTTPS_PROXY Proxy URL for HTTPS connection (example: http://proxy:8080) no_proxy NO_PROXY Comma separated list of hostnames for proxy exceptions (example: localhost,127.0.0.0/8,internal.opencti.io) app:https_cert:cookie_secure APP__HTTPS_CERT__COOKIE_SECURE false Set the flag \"secure\" for session cookies. app:https_cert:ca APP__HTTPS_CERT__CA Empty list [] Certificate authority paths or content, only if the client uses a self-signed certificate. app:https_cert:key APP__HTTPS_CERT__KEY Certificate key path or content app:https_cert:crt APP__HTTPS_CERT__CRT Certificate crt path or content app:https_cert:reject_unauthorized APP__HTTPS_CERT__REJECT_UNAUTHORIZED If not false, the server certificate is verified against the list of supplied CAs"},{"location":"deployment/configuration/#logging","title":"Logging","text":""},{"location":"deployment/configuration/#errors","title":"Errors","text":"Parameter Environment variable Default value Description app:app_logs:logs_level APP__APP_LOGS__LOGS_LEVEL info The application log level app:app_logs:logs_files APP__APP_LOGS__LOGS_FILES true
If application logs are logged into files app:app_logs:logs_console APP__APP_LOGS__LOGS_CONSOLE true
If application logs are logged to console (useful for containers) app:app_logs:logs_max_files APP__APP_LOGS__LOGS_MAX_FILES 7 Maximum number of daily files in logs app:app_logs:logs_directory APP__APP_LOGS__LOGS_DIRECTORY ./logs File logs directory"},{"location":"deployment/configuration/#audit","title":"Audit","text":"Parameter Environment variable Default value Description app:audit_logs:logs_files APP__AUDIT_LOGS__LOGS_FILES true
If audit logs are logged into files app:audit_logs:logs_console APP__AUDIT_LOGS__LOGS_CONSOLE true
If audit logs are logged to console (useful for containers) app:audit_logs:logs_max_files APP__AUDIT_LOGS__LOGS_MAX_FILES 7 Maximum number of daily files in logs app:audit_logs:logs_directory APP__AUDIT_LOGS__LOGS_DIRECTORY ./logs Audit logs directory"},{"location":"deployment/configuration/#maps-references","title":"Maps & references","text":"Parameter Environment variable Default value Description app:map_tile_server_dark APP__MAP_TILE_SERVER_DARK https://map.opencti.io/styles/luatix-dark/{z}/{x}/{y}.png The address of the OpenStreetMap provider with dark theme style app:map_tile_server_light APP__MAP_TILE_SERVER_LIGHT https://map.opencti.io/styles/luatix-light/{z}/{x}/{y}.png The address of the OpenStreetMap provider with light theme style app:reference_attachment APP__REFERENCE_ATTACHMENT false
External reference mandatory attachment"},{"location":"deployment/configuration/#technical-customization","title":"Technical customization","text":"Parameter Environment variable Default value Description app:graphql:playground:enabled APP__GRAPHQL__PLAYGROUND__ENABLED true
Enable the playground on /graphql app:graphql:playground:force_disabled_introspection APP__GRAPHQL_PLAYGROUND__FORCE_DISABLED_INTROSPECTION false
Introspection is allowed for authenticated users but can be disabled if needed app:concurrency:retry_count APP__CONCURRENCY__RETRY_COUNT 200 Number of retries to get the lock to work on an element (create/update/merge, ...) app:concurrency:retry_delay APP__CONCURRENCY__RETRY_DELAY 100 Delay between 2 lock retries (in milliseconds) app:concurrency:retry_jitter APP__CONCURRENCY__RETRY_JITTER 50 Random jitter to prevent concurrent retries (in milliseconds) app:concurrency:max_ttl APP__CONCURRENCY__MAX_TTL 30000 Global maximum time for lock retries (in milliseconds)"},{"location":"deployment/configuration/#dependencies","title":"Dependencies","text":""},{"location":"deployment/configuration/#elasticsearch","title":"ElasticSearch","text":"Parameter Environment variable Default value Description elasticsearch:engine_selector ELASTICSEARCH__ENGINE_SELECTOR auto elk
or opensearch
, default is auto
, please put elk
if you use token auth. elasticsearch:url ELASTICSEARCH__URL http://localhost:9200 URL(s) of the ElasticSearch (supports http://user:pass@localhost:9200 and list of URLs) elasticsearch:username ELASTICSEARCH__USERNAME Username can be put in the URL or with this parameter elasticsearch:password ELASTICSEARCH__PASSWORD Password can be put in the URL or with this parameter elasticsearch:api_key ELASTICSEARCH__API_KEY API key for ElasticSearch token auth. Please set also engine_selector
to elk
elasticsearch:index_prefix ELASTICSEARCH__INDEX_PREFIX opencti Prefix for the indices elasticsearch:ssl:reject_unauthorized ELASTICSEARCH__SSL__REJECT_UNAUTHORIZED true
Enable TLS certificate check elasticsearch:ssl:ca ELASTICSEARCH__SSL__CA Custom certificate path or content elasticsearch:ssl:ca_plain (deprecated) ELASTICSEARCH__SSL__CA_PLAIN @deprecated, use ca directly"},{"location":"deployment/configuration/#redis","title":"Redis","text":"Parameter Environment variable Default value Description redis:mode REDIS__MODE single Connect to redis \"single\" or \"cluster\" redis:namespace REDIS__NAMESPACE Namespace (to use as prefix) redis:hostname REDIS__HOSTNAME localhost Hostname of the Redis Server redis:hostnames REDIS__HOSTNAMES Hostnames definition for Redis cluster mode: a list of host/port objects. redis:port REDIS__PORT 6379 Port of the Redis Server redis:use_ssl REDIS__USE_SSL false
Whether the Redis Server has TLS enabled redis:username REDIS__USERNAME Username of the Redis Server redis:password REDIS__PASSWORD Password of the Redis Server redis:ca REDIS__CA [] List of path(s) of the CA certificate(s) redis:trimming REDIS__TRIMMING 2000000 Number of elements to maintain in the stream. (0 = unlimited)"},{"location":"deployment/configuration/#rabbitmq","title":"RabbitMQ","text":"Parameter Environment variable Default value Description rabbitmq:hostname RABBITMQ__HOSTNAME localhost Hostname of the RabbitMQ server rabbitmq:port RABBITMQ__PORT 5672 Port of the RabbitMQ server rabbitmq:port_management RABBITMQ__PORT_MANAGEMENT 15672 Port of the RabbitMQ Management Plugin rabbitmq:username RABBITMQ__USERNAME guest RabbitMQ user rabbitmq:password RABBITMQ__PASSWORD guest RabbitMQ password rabbitmq:queue_type RABBITMQ__QUEUE_TYPE \"classic\" RabbitMQ Queue Type (\"classic\" or \"quorum\") - - - - rabbitmq:use_ssl RABBITMQ__USE_SSL false
Use TLS connection rabbitmq:use_ssl_cert RABBITMQ__USE_SSL_CERT Path or cert content rabbitmq:use_ssl_key RABBITMQ__USE_SSL_KEY Path or key content rabbitmq:use_ssl_pfx RABBITMQ__USE_SSL_PFX Path or pfx content rabbitmq:use_ssl_ca RABBITMQ__USE_SSL_CA Path or cacert content rabbitmq:use_ssl_passphrase RABBITMQ__SSL_PASSPHRASE Passphrase for the key certificate rabbitmq:use_ssl_reject_unauthorized RABBITMQ__SSL_REJECT_UNAUTHORIZED false
Reject rabbit self signed certificate - - - - rabbitmq:management_ssl RABBITMQ__MANAGEMENT_SSL false
Whether the Management Plugin has TLS enabled rabbitmq:management_ssl_reject_unauthorized RABBITMQ__SSL_REJECT_UNAUTHORIZED true
Reject management self signed certificate"},{"location":"deployment/configuration/#s3-bucket","title":"S3 Bucket","text":"Parameter Environment variable Default value Description minio:endpoint MINIO__ENDPOINT localhost Hostname of the S3 Service minio:port MINIO__PORT 9000 Port of the S3 Service minio:use_ssl MINIO__USE_SSL false
Whether the S3 Service has TLS enabled minio:access_key MINIO__ACCESS_KEY ChangeMe The S3 Service access key minio:secret_key MINIO__SECRET_KEY ChangeMe The S3 Service secret key minio:bucket_name MINIO__BUCKET_NAME opencti-bucket The S3 bucket name (useful to change if you use AWS) minio:bucket_region MINIO__BUCKET_REGION us-east-1 The S3 bucket region if you use AWS minio:use_aws_role MINIO__USE_AWS_ROLE false
To use AWS role auto credentials"},{"location":"deployment/configuration/#smtp-service","title":"SMTP Service","text":"Parameter Environment variable Default value Description smtp:hostname SMTP__HOSTNAME SMTP Server hostname smtp:port SMTP__PORT 9000 SMTP Port (25 or 465 for TLS) smtp:use_ssl SMTP__USE_SSL false
SMTP over TLS smtp:reject_unauthorized SMTP__REJECT_UNAUTHORIZED false
Enable TLS certificate check smtp:username SMTP__USERNAME SMTP Username if authentication is needed smtp:password SMTP__PASSWORD SMTP Password if authentication is needed"},{"location":"deployment/configuration/#schedules-engines","title":"Schedules & Engines","text":"Parameter Environment variable Default value Description rule_engine:enabled RULE_ENGINE__ENABLED true
Enable/disable the rule engine rule_engine:lock_key RULE_ENGINE__LOCK_KEY rule_engine_lock Lock key of the engine in Redis - - - - history_manager:enabled HISTORY_MANAGER__ENABLED true
Enable/disable the history manager history_manager:lock_key HISTORY_MANAGER__LOCK_KEY history_manager_lock Lock key for the manager in Redis - - - - task_scheduler:enabled TASK_SCHEDULER__ENABLED true
Enable/disable the task scheduler task_scheduler:lock_key TASK_SCHEDULER__LOCK_KEY task_manager_lock Lock key for the scheduler in Redis task_scheduler:interval TASK_SCHEDULER__INTERVAL 10000 Interval to check new task to do (in ms) - - - - sync_manager:enabled SYNC_MANAGER__ENABLED true
Enable/disable the sync manager sync_manager:lock_key SYNC_MANAGER__LOCK_KEY sync_manager_lock Lock key for the manager in Redis sync_manager:interval SYNC_MANAGER__INTERVAL 10000 Interval to check new sync feeds to consume (in ms) - - - - expiration_scheduler:enabled EXPIRATION_SCHEDULER__ENABLED true
Enable/disable the scheduler expiration_scheduler:lock_key EXPIRATION_SCHEDULER__LOCK_KEY expired_manager_lock Lock key for the scheduler in Redis expiration_scheduler:interval EXPIRATION_SCHEDULER__INTERVAL 300000 Interval to check expired indicators (in ms) - - - - retention_manager:enabled RETENTION_MANAGER__ENABLED true
Enable/disable the retention manager retention_manager:lock_key RETENTION_MANAGER__LOCK_KEY retention_manager_lock Lock key for the manager in Redis retention_manager:interval RETENTION_MANAGER__INTERVAL 60000 Interval to check items to be deleted (in ms) - - - - notification_manager:enabled NOTIFICATION_MANAGER__ENABLED true
Enable/disable the notification manager notification_manager:lock_key NOTIFICATION_MANAGER__LOCK_KEY notification_manager_lock Lock key for the manager in Redis notification_manager:interval NOTIFICATION_MANAGER__INTERVAL 10000 Interval to push notifications - - - - publisher_manager:enabled PUBLISHER_MANAGER__ENABLED true
Enable/disable the publisher manager publisher_manager:lock_key PUBLISHER_MANAGER__LOCK_KEY publisher_manager_lock Lock key for the manager in Redis publisher_manager:interval PUBLISHER_MANAGER__INTERVAL 10000 Interval to send notifications / digests (in ms) - - - - ingestion_manager:enabled INGESTION_MANAGER__ENABLED true
Enable/disable the ingestion manager ingestion_manager:lock_key INGESTION_MANAGER__LOCK_KEY ingestion_manager_lock Lock key for the manager in Redis ingestion_manager:interval INGESTION_MANAGER__INTERVAL 300000 Interval to check for new data in remote feeds - - - - playbook_manager:enabled PLAYBOOK_MANAGER__ENABLED true
Enable/disable the playbook manager playbook_manager:lock_key PLAYBOOK_MANAGER__LOCK_KEY publisher_manager_lock Lock key for the manager in Redis playbook_manager:interval PLAYBOOK_MANAGER__INTERVAL 60000 Interval to check new playbooks Default file
It is possible to check all default parameters implemented in the platform in the default.json
file.
They can be configured manually using the configuration file config.yml
or through environment variables.
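As an illustration of the mapping described above, the same setting can be expressed either in config.yml or as an environment variable: every colon in the parameter path becomes a nesting level in YAML and a double underscore in the variable name (the values below are just the defaults from the tables above):

```yaml
# config.yml fragment (illustrative values taken from the defaults above)
app:
  admin:
    email: admin@opencti.io      # equivalent to APP__ADMIN__EMAIL
  session_idle_timeout: 1200000  # equivalent to APP__SESSION_IDLE_TIMEOUT
elasticsearch:
  url: http://localhost:9200     # equivalent to ELASTICSEARCH__URL
```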
"},{"location":"deployment/configuration/#worker-specific-configuration","title":"Worker specific configuration","text":"Parameter Environment variable Default value Description worker:log_level WORKER_LOG_LEVEL info The log level (error, warning, info or debug)"},{"location":"deployment/configuration/#connector-specific-configuration","title":"Connector specific configuration","text":"For specific connector configuration, you need to check each connector behavior.
"},{"location":"deployment/configuration/#elasticsearch_1","title":"ElasticSearch","text":"If you want to adapt the memory consumption of ElasticSearch, you can use these options:
# Add the following environment variable:\n\"ES_JAVA_OPTS=-Xms8g -Xmx8g\"\n
This can also be set in the jvm.options
configuration file.
Connectors list
Are you looking for the available connectors? The list is in the OpenCTI Ecosystem.
Connectors are the cornerstone of the OpenCTI platform and allow organizations to easily ingest, enrich or export data in the platform. According to their functionality and use case, they are categorized into the following classes.
"},{"location":"deployment/connectors/#import","title":"Import","text":"These connectors automatically retrieve information from an external organization, application or service, convert it to STIX 2.1 bundles and import it into OpenCTI using the workers.
"},{"location":"deployment/connectors/#enrichment","title":"Enrichment","text":"When a new object is created in the platform or on user request, it is possible to trigger the internal enrichment connectors to look up and/or search the object in external organizations, applications or services. If the object is found, the connectors will generate a STIX 2.1 bundle which will increase the level of knowledge about the concerned object.
"},{"location":"deployment/connectors/#stream","title":"Stream","text":"These connectors connect to a platform data stream and continuously do something with the received events. In most cases, they are used to consume OpenCTI data and insert them in third-party platforms such as SIEMs, XDRs, EDRs, etc. In some cases, stream connectors can also query the external system on a regular basis and act as an import connector, for instance to gather alerts and sightings related to CTI data and push them to OpenCTI (bi-directional).
"},{"location":"deployment/connectors/#import-files","title":"Import files","text":"Information from an uploaded file can be extracted and ingested into OpenCTI. Examples are files attached to a report or a STIX 2.1 file.
"},{"location":"deployment/connectors/#export-files","title":"Export files","text":"Information stored in OpenCTI can be extracted into different file formats like .csv or .json (STIX 2).
"},{"location":"deployment/connectors/#connector-configuration","title":"Connector configuration","text":"All connectors have to be able to access the OpenCTI API. To allow this connection, they have 2 mandatory configuration parameters, the OPENCTI_URL
and the OPENCTI_TOKEN
. In addition to these 2 parameters, connectors have other mandatory parameters that need to be set in order to get them working.
Connectors tokens
Be careful, we strongly recommend using a dedicated token for each connector running in the platform, so you have to create a specific user for each of them.
Also, while most connectors can run with a user belonging to the Connectors
group (with the Connector
role), the Internal Export Files
connectors should be run with an Administrator user (with the bypass capability) because they impersonate the user requesting the export to avoid data leaks.
Here is an example of a connector docker-compose.yml
file:
- CONNECTOR_ID=ChangeMe\n- CONNECTOR_TYPE=EXTERNAL_IMPORT\n- CONNECTOR_NAME=MITRE ATT&CK\n- CONNECTOR_SCOPE=identity,attack-pattern,course-of-action,intrusion-set,malware,tool,report\n- CONNECTOR_CONFIDENCE_LEVEL=3\n- CONNECTOR_UPDATE_EXISTING_DATA=true\n- CONNECTOR_LOG_LEVEL=info\n
Here is an example in a connector config.yml
file:
connector:\n  id: 'ChangeMe'\n  type: 'EXTERNAL_IMPORT'\n  name: 'MITRE ATT&CK'\n  scope: 'identity,attack-pattern,course-of-action,intrusion-set,malware,tool,report'\n  confidence_level: 3\n  update_existing_data: true\n  log_level: 'info'\n
"},{"location":"deployment/connectors/#networking","title":"Networking","text":"Be aware that all connectors reach RabbitMQ based on the RabbitMQ configuration provided by the OpenCTI platform. The connector must be able to reach RabbitMQ on the specified hostname and port. If you have a specific Docker network configuration, please be sure to adapt your docker-compose.yml
file in such way that the connector container gets attached to the OpenCTI Network, e.g.:
networks:\ndefault:\nexternal: true\nname: opencti-docker_default\n
"},{"location":"deployment/connectors/#connector-token","title":"Connector token","text":""},{"location":"deployment/connectors/#create-the-user","title":"Create the user","text":"As mentioned previously, it is strongly recommended to run each connector with its own user. The Internal Export File
connectors should be launched with a user that belongs to a group which has an \u201cAdministrator\u201d role (with bypass all capabilities enabled).
By default, in the platform, a group named \"Connectors\" already exists. So just create a new user with the name [C] Name of the connector
in Settings > Security > Users.
Just go to the user you have just created and add it to the Connectors
group.
Then just get the token of the user displayed in the interface.
"},{"location":"deployment/connectors/#docker-activation","title":"Docker activation","text":"You can either directly run the Docker image of connectors or add them to your current docker-compose.yml
file.
For instance, to enable the MISP connector, you can add a new service to your docker-compose.yml
file:
connector-misp:\n  image: opencti/connector-misp:latest\n  environment:\n    - OPENCTI_URL=http://localhost\n    - OPENCTI_TOKEN=ChangeMe\n    - CONNECTOR_ID=ChangeMe\n    - CONNECTOR_TYPE=EXTERNAL_IMPORT\n    - CONNECTOR_NAME=MISP\n    - CONNECTOR_SCOPE=misp\n    - CONNECTOR_CONFIDENCE_LEVEL=3\n    - CONNECTOR_UPDATE_EXISTING_DATA=false\n    - CONNECTOR_LOG_LEVEL=info\n    - MISP_URL=http://localhost # Required\n    - MISP_KEY=ChangeMe # Required\n    - MISP_SSL_VERIFY=False # Required\n    - MISP_CREATE_REPORTS=True # Required, create report for MISP event\n    - MISP_REPORT_CLASS=MISP event # Optional, report_class if creating report for event\n    - MISP_IMPORT_FROM_DATE=2000-01-01 # Optional, import all events from this date\n    - MISP_IMPORT_TAGS=opencti:import,type:osint # Optional, list of tags used for import events\n    - MISP_INTERVAL=1 # Required, in minutes\n  restart: always\n
"},{"location":"deployment/connectors/#launch-a-standalone-connector","title":"Launch a standalone connector","text":"To launch a standalone connector, you can use the docker-compose.yml
file of the connector itself. Just download the latest release and start the connector:
$ wget https://github.com/OpenCTI-Platform/connectors/archive/{RELEASE_VERSION}.zip\n$ unzip {RELEASE_VERSION}.zip\n$ cd connectors-{RELEASE_VERSION}/misp/\n
Change the configuration in the docker-compose.yml
according to the parameters of the platform and of the targeted service. Then launch the connector:
$ docker-compose up\n
"},{"location":"deployment/connectors/#manual-activation","title":"Manual activation","text":"If you want to manually launch a connector, you just have to install Python 3 and pip3 for dependencies:
$ apt install python3 python3-pip\n
Download the release of the connectors:
$ wget https://github.com/OpenCTI-Platform/connectors/archive/{RELEASE_VERSION}.zip\n$ unzip {RELEASE_VERSION}.zip\n$ cd connectors-{RELEASE_VERSION}/misp/src/\n
Install dependencies and initialize the configuration:
$ pip3 install -r requirements.txt\n$ cp config.yml.sample config.yml\n
Change the config.yml
content according to the parameters of the platform and of the targeted service and launch the connector:
$ python3 misp.py\n
"},{"location":"deployment/connectors/#connectors-status","title":"Connectors status","text":"The connector status can be displayed in the dedicated section of the platform available in Data > Connectors. You will be able to see the statistics of the RabbitMQ queue of the connector:
Problem
If you encounter problems deploying OpenCTI or connectors, you can consult the troubleshooting page.
"},{"location":"deployment/installation/","title":"Installation","text":"All components of OpenCTI are shipped both as Docker images and manual installation packages.
Production deployment
For production deployment, we recommend deploying all components in containers, including dependencies, using native cloud services or orchestration systems such as Kubernetes.
For more details about deploying OpenCTI and its dependencies in cluster mode, please read the dedicated section.
Use Docker
Deploy OpenCTI using Docker and the default docker-compose.yml
provided in the docker repository.
Setup
Manual installation
Deploy dependencies and launch the platform manually using the packages released in the GitHub releases.
Explore
OpenCTI can be deployed using the docker-compose command.
"},{"location":"deployment/installation/#pre-requisites","title":"Pre-requisites","text":"Linux
$ sudo apt install docker-compose\n
Windows and MacOS
Just download the appropriate Docker for Desktop version for your operating system.
"},{"location":"deployment/installation/#clone-the-repository","title":"Clone the repository","text":"Docker helpers are available in the Docker GitHub repository.
$ mkdir -p /path/to/your/app && cd /path/to/your/app\n$ git clone https://github.com/OpenCTI-Platform/docker.git\n$ cd docker\n
"},{"location":"deployment/installation/#configure-the-environment","title":"Configure the environment","text":"Before running the docker-compose
command, the docker-compose.yml
file should be configured. By default, the docker-compose.yml
file is using environment variables available in the file .env.sample
.
You can either rename the file .env.sample
in .env
and put the expected values or just fill directly the docker-compose.yml
with the values corresponding to your environment.
Configuration static parameters
The complete list of available static parameters is available in the configuration section.
Here is an example to quickly generate the .env
file under Linux, especially all the default UUIDv4:
$ sudo apt install -y jq\n$ cd ~/docker\n$ (cat << EOF\nOPENCTI_ADMIN_EMAIL=admin@opencti.io\nOPENCTI_ADMIN_PASSWORD=ChangeMePlease\nOPENCTI_ADMIN_TOKEN=$(cat /proc/sys/kernel/random/uuid)\nMINIO_ROOT_USER=$(cat /proc/sys/kernel/random/uuid)\nMINIO_ROOT_PASSWORD=$(cat /proc/sys/kernel/random/uuid)\nRABBITMQ_DEFAULT_USER=guest\nRABBITMQ_DEFAULT_PASS=guest\nELASTIC_MEMORY_SIZE=4G\nCONNECTOR_HISTORY_ID=$(cat /proc/sys/kernel/random/uuid)\nCONNECTOR_EXPORT_FILE_STIX_ID=$(cat /proc/sys/kernel/random/uuid)\nCONNECTOR_EXPORT_FILE_CSV_ID=$(cat /proc/sys/kernel/random/uuid)\nCONNECTOR_IMPORT_FILE_STIX_ID=$(cat /proc/sys/kernel/random/uuid)\nCONNECTOR_IMPORT_REPORT_ID=$(cat /proc/sys/kernel/random/uuid)\nEOF\n) > .env\n
If your docker-compose
deployment does not support .env
files, just export all environment variables before launching the platform:
$ export $(cat .env | grep -v \"#\" | xargs)\n
\u00b2
As OpenCTI has a dependency on ElasticSearch, you have to set the vm.max_map_count
before running the containers, as mentioned in the ElasticSearch documentation.
$ sudo sysctl -w vm.max_map_count=1048575\n
To make this parameter persistent, add the following to the end of your /etc/sysctl.conf
:
vm.max_map_count=1048575\n
"},{"location":"deployment/installation/#persist-data","title":"Persist data","text":"The default for OpenCTI data is to be persistent.
In the docker-compose.yml
, you will find at the end the list of necessary persistent volumes for the dependencies:
volumes:\nesdata: # ElasticSearch data\ns3data: # S3 bucket data\nredisdata: # Redis data\namqpdata: # RabbitMQ data\n
"},{"location":"deployment/installation/#run-opencti","title":"Run OpenCTI","text":""},{"location":"deployment/installation/#using-single-node-docker","title":"Using single node Docker","text":"After changing your .env
file run docker-compose
in detached (-d) mode:
$ sudo systemctl start docker.service\n# Run docker-compose in detached \n$ docker-compose up -d\n
"},{"location":"deployment/installation/#using-docker-swarm","title":"Using Docker swarm","text":"In order to have the best experience with Docker, we recommend using the Docker stack feature. In this mode you will have the capacity to easily scale your deployment.
# If your virtual machine is not a part of a Swarm cluster, please use:\n$ docker swarm init\n
Put your environment variables in /etc/environment
:
# If you already exported your variables to .env from above:\n$ sudo bash -c 'cat .env >> /etc/environment'\n$ sudo docker stack deploy --compose-file docker-compose.yml opencti\n
Installation done
You can now go to http://localhost:8080 and log in with the credentials configured in your environment variables.
"},{"location":"deployment/installation/#manual-installation","title":"Manual installation","text":""},{"location":"deployment/installation/#prerequisites","title":"Prerequisites","text":""},{"location":"deployment/installation/#prepare-the-installation","title":"Prepare the installation","text":""},{"location":"deployment/installation/#installation-of-dependencies","title":"Installation of dependencies","text":"You have to install all the needed dependencies for the main application and the workers. The example below is for Debian-based systems:
$ sudo apt-get install build-essential nodejs npm python3 python3-pip python3-dev\n
"},{"location":"deployment/installation/#download-the-application-files","title":"Download the application files","text":"First, you have to download and extract the latest release file. Then select the version to install depending on your operating system:
For Linux:
opencti-release_{RELEASE_VERSION}.tar.gz
version.opencti-release-{RELEASE_VERSION}_musl.tar.gz
version.For Windows:
We don't provide any Windows release for now. However it is still possible to check the code out, manually install the dependencies and build the software.
$ mkdir /path/to/your/app && cd /path/to/your/app\n$ wget <https://github.com/OpenCTI-Platform/opencti/releases/download/{RELEASE_VERSION}/opencti-release-{RELEASE_VERSION}.tar.gz>\n$ tar xvfz opencti-release-{RELEASE_VERSION}.tar.gz\n
"},{"location":"deployment/installation/#install-the-main-platform","title":"Install the main platform","text":""},{"location":"deployment/installation/#configure-the-application","title":"Configure the application","text":"The main application has just one JSON configuration file to change and a few Python modules to install
$ cd opencti\n$ cp config/default.json config/production.json\n
Change the config/production.json file according to your configuration of ElasticSearch, Redis, RabbitMQ and S3 bucket as well as default credentials (the ADMIN_TOKEN
must be a valid UUID).
$ cd src/python\n$ pip3 install -r requirements.txt\n$ cd ../..\n
"},{"location":"deployment/installation/#start-the-application","title":"Start the application","text":"The application is just a NodeJS process, the creation of the database schema and the migration will be done at starting.
$ yarn install\n$ yarn build\n$ yarn serv\n
The default username and password are those you have put in the config/production.json
file.
The OpenCTI worker is used to write the data coming from the RabbitMQ messages broker.
"},{"location":"deployment/installation/#configure-the-worker","title":"Configure the worker","text":"$ cd worker\n$ pip3 install -r requirements.txt\n$ cp config.yml.sample config.yml\n
Change the config.yml file according to your OpenCTI token.
"},{"location":"deployment/installation/#start-as-many-workers-as-you-need","title":"Start as many workers as you need","text":"$ python3 worker.py &\n$ python3 worker.py &\n
Installation done
You can now go to http://localhost:4000 and log in with the credentials configured in your production.json
file.
Multi-clouds Terraform scripts
This repository is here to provide you with a quick and easy way to deploy an OpenCTI instance in the cloud (AWS, Azure, or GCP).
GitHub Respository
AWS Advanced Terraform scripts
A Terraform deployment of OpenCTI designed to make use of native AWS Resources (where feasible). This includes AWS ECS Fargate, AWS OpenSearch, etc.
GitHub Repository
Kubernetes Helm Charts
OpenCTI Helm Charts (may be out of date) for Kubernetes with a global configuration file.
GitHub Repository
If you want to use OpenCTI behind a reverse proxy with a context path, like https://domain.com/opencti
, please change the base_path
static parameter.
APP__BASE_PATH=/opencti
By default, OpenCTI uses websockets, so don't forget to configure your proxy for this usage. An example with Nginx
:
location / {\n    proxy_cache off;\n    proxy_buffering off;\n    proxy_http_version 1.1;\n    proxy_set_header Upgrade $http_upgrade;\n    proxy_set_header Connection \"upgrade\";\n    proxy_set_header Host $host;\n    chunked_transfer_encoding off;\n    proxy_pass http://YOUR_UPSTREAM_BACKEND;\n}\n
"},{"location":"deployment/installation/#additional-memory-information","title":"Additional memory information","text":""},{"location":"deployment/installation/#platform","title":"Platform","text":"OpenCTI platform is based on a NodeJS runtime, with a memory limit of 8GB by default. If you encounter OutOfMemory
exceptions, this limit could be changed:
- NODE_OPTIONS=--max-old-space-size=8096\n
"},{"location":"deployment/installation/#workers-and-connectors","title":"Workers and connectors","text":"OpenCTI workers and connectors are Python processes. If you want to limit the memory of the process, we recommend using Docker directly to do that. You can find more information in the official Docker documentation.
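As a hedged sketch of that approach, a worker service in docker-compose.yml could be capped like this (the service name and the 512M limit are illustrative, not recommended values):

```yaml
worker:
  image: opencti/worker:latest
  deploy:
    resources:
      limits:
        memory: 512M   # hard memory cap for this container (illustrative value)
  restart: always
```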
"},{"location":"deployment/installation/#elasticsearch","title":"ElasticSearch","text":"ElasticSearch is also a JAVA process. In order to setup the JAVA memory allocation, you can use the environment variable ES_JAVA_OPTS
. You can find more information in the official ElasticSearch documentation.
Redis has a very small footprint on keys but will consume memory for the stream. By default the size of the stream is limited to 2 million entries, which represents a memory footprint of around 8 GB
. You can find more information in the Redis docker hub.
MinIO is a small process and does not require a high amount of memory. More information is available for Linux in the Kernel tuning guide.
"},{"location":"deployment/installation/#rabbitmq","title":"RabbitMQ","text":"The RabbitMQ memory configuration can be found in the RabbitMQ official documentation. RabbitMQ will consume memory up to a specific threshold, therefore it should be configured along with the Docker memory limitation.
"},{"location":"deployment/integrations/","title":"Integrations","text":""},{"location":"deployment/integrations/#introduction","title":"Introduction","text":"OpenCTI supports multiple ways to integrate with other systems which do not have native connectors or plugins to the platform. Here are the technical features available to ease the connection and the integration of the platform with other applications.
Connectors list
If you are looking to the list of OpenCTI connectors or native integration, please check the OpenCTI Ecosystem.
"},{"location":"deployment/integrations/#native-feeds-and-streams","title":"Native feeds and streams","text":"To ease integrations with other products, OpenCTI has built-in capabilities to deliver the data to third-parties.
"},{"location":"deployment/integrations/#csv-feeds","title":"CSV Feeds","text":"It is possible to create as many CSV feeds as needed, based on filters and accessible in HTTP. CSV feeds are available in Data > Data sharing > Feeds (CSV).
When creating a CSV feed, you need to select one or multiple types of entity to make available. For all columns available in the CSV, you've to select which field will be used for each type of entity:
Details
For more information about CSV feeds, filters and configuration, please check the Export in structured format section.
"},{"location":"deployment/integrations/#taxii-collections","title":"TAXII collections","text":"Most of the moden cybersecurity systems such as SIEMs, EDRs, XDRs and even firewalls supports the TAXII protocol which is basically a paginated HTTP STIX feed. OpenCTI implements a TAXII 2.1 server with the ability to create as many TAXII collections as needed in Data > Data sharing > TAXII Collections?
TAXII collections are a sub-selection of the knowledge available in the platform and relie on filters. For instance, it is possible to create TAXII collections for pieces of malware with a given label, for indicators with a score greater than n, etc.
"},{"location":"deployment/integrations/#http-streams","title":"HTTP Streams","text":"After implementing CSV feeds and TAXII collections, we figured out that those 2 stateless APIs are definitely not enough when it comes to tackle advanced information sharing challenges such as:
Live streams are available in Data > Data sharing > Live streams. As TAXII collections, it is possible to create as many streams as needed using filters.
Streams implement the HTTP SSE (Server-sent events) protocol and give applications to consume a real time pure STIX 2.1 stream. Stream connectors in the OpenCTI Ecosystem are using live streams to consume data and do something such as create / update / delete information in SIEMs, XDRs, etc.
"},{"location":"deployment/integrations/#authentication","title":"Authentication","text":"For all previously explained capabilities, as they are over the HTTP protocol, 3 authentication mechanisms are available to consume them.
Using a bearer header with your OpenCTI API key
Authorization: Bearer a17bc103-8420-4208-bd53-e1f80845d15f\n
API Key
Your API key can be found in your profile available clicking on the top right icon.
Using basic authentication
Username: Your platform username\nPassword: Your platform password\nAuthorization: Basic c2FtdWVsLmhhc3NpbmVBZmlsaWdyYW4uaW86TG91aXNlMTMwNCM=\n
Using client certificate authentication
To know how to configure the client certificate authentication, please consult the authentication configuration section.
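As a sketch, the bearer and basic headers shown above can be built with the standard library (the credentials below are placeholders, not real values):

```python
import base64

def bearer_header(api_key):
    # Bearer authentication: the OpenCTI API key goes directly in the header
    return {"Authorization": f"Bearer {api_key}"}

def basic_header(username, password):
    # Basic authentication: "username:password" is base64-encoded per RFC 7617
    creds = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {creds}"}
```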
To allow analysts and developers to implement more custom or complex use cases, a full GraphQL API is available in the application on the /graphql
endpoint.
The API can be queried using various GraphQL clients such as Postman, but you can leverage any HTTP client to forge GraphQL queries using POST
methods.
The API authentication can be performed using the token of a user and a classic Authorization header:
Content-Type: application/json\nAuthorization: Bearer 6b6554c4-bb2c-4c80-9cd3-30288c8bf424\n
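For example, a sketch of forging such a request body with any HTTP client; the query text and the token value are illustrative assumptions, and only the payload construction is shown:

```python
import json

def build_graphql_request(query, token, variables=None):
    """Return (headers, body) for a POST to the /graphql endpoint."""
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {token}",
    }
    # GraphQL over HTTP: a JSON object with "query" and optional "variables"
    body = json.dumps({"query": query, "variables": variables or {}})
    return headers, body
```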
"},{"location":"deployment/integrations/#playground","title":"Playground","text":"The playground is available on the /graphql
endpoint. A link button is also available in the profile of your user.
All the schema documentation is directly available in the playground.
If you are already logged in to OpenCTI with the same browser, you should be able to directly make some requests. If you are not authenticated or want to authenticate only through the playground, you can use a header configuration with your profile token
Example of configuration (bottom left of the playground):
"},{"location":"deployment/integrations/#python-library","title":"Python library","text":"Since not everyone is familiar with GraphQL APIs, we've developed a Python library to ease the interaction with it. The library is pretty easy to use. To initiate the client:
# coding: utf-8\nfrom pycti import OpenCTIApiClient\n# Variables\napi_url = \"http://opencti:4000\"\napi_token = \"bfa014e0-e02e-4aa6-a42b-603b19dcf159\"\n# OpenCTI initialization\nopencti_api_client = OpenCTIApiClient(api_url, api_token)\n
Then just use the available helpers:
# Search for malware with the keyword \"windows\"\nmalwares = opencti_api_client.malware.list(search=\"windows\")\n# Print\nprint(malwares)\n
Details
For more detailed information about the Python library, please read the dedicated section.
"},{"location":"deployment/overview/","title":"Overview","text":"Before starting the installation, let's discover how OpenCTI is working, which dependencies are needed and what are the minimal requirements to deploy it in production.
"},{"location":"deployment/overview/#architecture","title":"Architecture","text":"The OpenCTI platform relies on several external databases and services in order to work.
"},{"location":"deployment/overview/#platform","title":"Platform","text":"The platform is the central part of the OpenCTI technological stack. It allows users to access to the user interface but also provides the GraphQL API used by connectors and workers to insert data. In the context of a production deployment, you may need to scale horizontally and launch multiple platforms behind a load balancer connected to the same databases (ElasticSearch, Redis, S3, RabbitMQ).
"},{"location":"deployment/overview/#workers","title":"Workers","text":"The workers are standalone Python processes consuming messages from the RabbitMQ broker in order to do asynchronous write queries. You can launch as many workers as you need to increase the write performances. At some point, the write performances will be limited by the throughput of the ElasticSearch database cluster.
Number of workers
If you need to increase performance, it is better to launch more platforms to handle worker queries. The recommended setup is to have at least one platform for 3 workers (i.e. 9 workers distributed over 3 platforms).
"},{"location":"deployment/overview/#connectors","title":"Connectors","text":"The connectors are third-party pieces of software (Python processes) that can play five different roles on the platform:
Type Description Examples EXTERNAL_IMPORT Pull data from remote sources, convert it to STIX2 and insert it on the OpenCTI platform. MITRE Datasets, MISP, CVE, AlienVault, Mandiant, etc. INTERNAL_ENRICHMENT Listen for new OpenCTI entities or user requests, pull data from remote sources to enrich. Shodan, DomainTools, IpInfo, etc. INTERNAL_IMPORT_FILE Extract data from files uploaded on OpenCTI through the UI or the API. STIX 2.1, PDF, Text, HTML, etc. INTERNAL_EXPORT_FILE Generate exports from OpenCTI data, based on a single object or a list. STIX 2.1, CSV, PDF, etc. STREAM Consume a platform data stream and do something with events. Splunk, Elastic Security, Q-Radar, etc.List of connectors
You can find all currently available connectors in the OpenCTI Ecosystem.
"},{"location":"deployment/overview/#infrastructure-requirements","title":"Infrastructure requirements","text":""},{"location":"deployment/overview/#dependencies","title":"Dependencies","text":"Component Version CPU RAM Disk type Disk space ElasticSearch / OpenSearch \u2265 8.0 / \u2265 2.9 2 cores \u2265 8GB SSD \u2265 16GB Redis \u2265 7.1 1 core \u2265 1GB SSD \u2265 16GB RabbitMQ \u2265 3.11 1 core \u2265 512MB Standard \u2265 2GB S3 / MinIO \u2265 RELEASE.2023-02 1 core \u2265 128MB SSD \u2265 16GB"},{"location":"deployment/overview/#platform_1","title":"Platform","text":"Component CPU RAM Disk type Disk space OpenCTI Core 2 cores \u2265 8GB None (stateless) - Worker(s) 1 core \u2265 128MB None (stateless) - Connector(s) 1 core \u2265 128MB None (stateless) -Clustering
To have more details about deploying OpenCTI and its dependencies in cluster mode, please read the dedicated section.
"},{"location":"deployment/resources/","title":"Other resources","text":""},{"location":"deployment/resources/#introduction","title":"Introduction","text":"OpenCTI is an open and modular platform. A lot of connectors, plugins and clients are created by Filigran and community. You can find here other resources available to complete your OpenCTI journey.
"},{"location":"deployment/resources/#videos-training","title":"Videos & training","text":"YouTube channel
Watch demonstration videos, use case explanations, customer and community testimonials and past webinars.
Watch
Training courses
Empower your journey with OpenCTI training courses for both analysts and administrators and get your certificate.
Learn
Blog articles
Read posts written by both Filigran teams and community members about OpenCTI features and use cases.
Read
Newsletters
Subscribe to Filigran newsletters to get informed about the latest evolutions of our product ecosystems.
Subscribe
Verticalized threat landscapes
Access monthly sectorial analyses from our expert team, based on knowledge and data collected by our partners.
Consult
Case studies
Explore the Filigran case studies about stories and usage of the platform among our communities and customers.
Download
Default rollover policies
Since OpenCTI 5.9.0, rollover policies are automatically created when the platform is initialized for the first time. If your platform has been initialized using an older version of OpenCTI, or if you would like to understand (and customize) rollover policies, please read the following documentation.
"},{"location":"deployment/rollover/#introduction","title":"Introduction","text":"ElasticSearch and OpenSearch both support rollover on indices. OpenCTI has been designed to be able to use aliases for indices and so support very well index lifeycle policies. Thus, by default OpenCTI initialized indices with a suffix -00001
and uses wildcards to query indices. When rollover policies are implemented (the default starting with OpenCTI 5.9.X if you initialized your platform at this version), indices are split to keep a reasonable volume of data in shards.
By default, a rollover policy is applied on all indices used by OpenCTI.
opencti_history
opencti_inferred_entities
opencti_inferred_relationships
opencti_internal_objects
opencti_internal_relationships
opencti_stix_core_relationships
opencti_stix_cyber_observable_relationships
opencti_stix_cyber_observables
opencti_stix_domain_objects
opencti_stix_meta_objects
opencti_stix_meta_relationships
opencti_stix_sighting_relationships
For your information, the indices which can grow rapidly are:
opencti_stix_meta_relationships
: it contains all the nested relationships between objects and labels / marking definitions / external references / authors, etc.opencti_history
: it contains the history log of all objects in the platform.opencti_stix_cyber_observables
: it contains all observables stored in the platform.opencti_stix_core_relationships
: it contains all main STIX relationships stored in the platform.Here is the recommended policy (initialized starting 5.9.X):
50 GB
365 days
75,000,000
Procedure information
Please read the following only if your platform has been initialized before 5.9.0; otherwise lifecycle policies have already been created (but you can still customize them).
Unfortunately, to implement rollover policies on ElasticSearch / OpenSearch indices, you will need to re-index all the data into new indices using ElasticSearch capabilities.
"},{"location":"deployment/rollover/#shutdown","title":"Shutdown","text":"First step is to shutdown your OpenCTI platform.
"},{"location":"deployment/rollover/#change-configuration","title":"Change configuration","text":"Then, in the OpenCTI configuration, change the ElasticSearch / OpenSearch default prefix to octi
(default is opencti
).
Create a rollover policy named octi-ilm-policy
(in Kibana, Management > Index Lifecycle Policies
):
50 GB
365 days
75,000,000
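If you prefer the API to the Kibana UI, the same three thresholds can be expressed as an ILM policy body. This is a sketch; the exact phase layout is an assumption to adapt to your cluster, and the resulting JSON could be PUT to _ilm/policy/octi-ilm-policy:

```python
def rollover_policy(max_size="50gb", max_age="365d", max_docs=75_000_000):
    """Build an ILM policy body mirroring the three thresholds above:
    maximum index size, maximum age, and maximum number of documents."""
    return {
        "policy": {
            "phases": {
                "hot": {
                    "actions": {
                        "rollover": {
                            "max_size": max_size,
                            "max_age": max_age,
                            "max_docs": max_docs,
                        }
                    }
                }
            }
        }
    }
```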
In Kibana, clone the opencti-index-template
to have one index template by OpenCTI index with the appropriate rollover policy, index pattern and rollover alias (in Kibana, Management > Index Management > Index Templates
).
Create the following index templates:
octi_history
octi_inferred_entities
octi_inferred_relationships
octi_internal_objects
octi_internal_relationships
octi_stix_core_relationships
octi_stix_cyber_observable_relationships
octi_stix_cyber_observables
octi_stix_domain_objects
octi_stix_meta_objects
octi_stix_meta_relationships
octi_stix_sighting_relationships
Here is the overview of all templates (you should have something with octi_
instead of opencti_
).
Then, going back in the index lifecycle policies screen, you can click on the \"+\" button of the octi-ilm-policy
to Add the policy to index template
, then add the policy to each previously created template with the proper \"Alias for rollover index\".
Before we can re-index, we need to create the new indices with aliases.
PUT octi_history-000001\n{\n \"aliases\": {\n \"octi_history\": {\n \"is_write_index\": true\n }\n }\n}\n
Repeat this step for all indices:
octi_history
octi_inferred_entities
octi_inferred_relationships
octi_internal_objects
octi_internal_relationships
octi_stix_core_relationships
octi_stix_cyber_observable_relationships
octi_stix_cyber_observables
octi_stix_domain_objects
octi_stix_meta_objects
octi_stix_meta_relationships
octi_stix_sighting_relationships
Using the reindex
API, re-index all indices one by one:
curl -X POST \"localhost:9200/_reindex?pretty\" -H 'Content-Type: application/json' -d'\n{\n \"source\": {\n \"index\": \"opencti_history-000001\"\n },\n \"dest\": {\n \"index\": \"octi_history\"\n }\n}\n'\n
You will see the rollover policy being applied and the new indices being automatically rolled over during reindexation.
"},{"location":"deployment/rollover/#delete-all-old-indices","title":"Delete all old indices","text":"Then just delete all indices with the prefix opencti_
.
Start your platform, using the new indices.
Rollover documentation
To have more details about automatic rollover and lifecycle policies, please read the official ElasticSearch documentation.
"},{"location":"deployment/troubleshooting/","title":"Troubleshooting","text":"This page aims to explains the typical errors you can have with your OpenCTI platform.
"},{"location":"deployment/troubleshooting/#finding-the-relevant-logs","title":"Finding the relevant logs","text":"It is highly recommended to monitor the error logs of the platforms, workers and connectors. All the components have log outputs in an understandable JSON format. It necessary, it is always possible to increase the log level. In production, it is recommended to have the log level set to error
.
Here are some useful parameters for platform logging:
- APP__APP_LOGS__LOGS_LEVEL=[error|warning|info|debug]\n- APP__APP_LOGS__LOGS_CONSOLE=true # Output in the container console\n
"},{"location":"deployment/troubleshooting/#connectors","title":"Connectors","text":"All connectors support the same set of parameters to manage the log level and outputs:
- OPENCTI_JSON_LOGGING=true # Enable / disable JSON logging\n- CONNECTOR_LOG_LEVEL=[error|warning|info|debug]\n
"},{"location":"deployment/troubleshooting/#workers","title":"Workers","text":"The workers can have more or less verbose outputs:
- OPENCTI_JSON_LOGGING=true # Enable / disable JSON logging\n- WORKER_LOG_LEVEL=[error|warning|info|debug]\n
"},{"location":"deployment/troubleshooting/#common-errors","title":"Common errors","text":""},{"location":"deployment/troubleshooting/#ingestion-technical-errors","title":"Ingestion technical errors","text":"Missing reference to handle creation
After 5 retries, if an element required to create another element is missing, the platform raises an exception. It usually comes from a connector that generates inconsistent STIX 2.1 bundles.
Cant upsert entity. Too many entities resolved
OpenCTI received an entity which matches too many other entities in the platform. In this condition, the platform cannot take a decision. You need to dig into the data bundle to identify why it matches too many entities and fix the data in the bundle or in the platform according to what you expect.
Execution timeout, too many concurrent call on the same entities
The platform supports multi workers and multiple parallel creation but different parameters can lead to some locking timeout in the execution.
If you encounter this kind of error, limit the number of workers deployed. Try to find the right balance between the number of workers, connectors and the ElasticSearch sizing.
"},{"location":"deployment/troubleshooting/#ingestion-functional-errors","title":"Ingestion functional errors","text":"Indicator of type yara is not correctly formatted
OpenCTI checks the validity of the indicator rule.
Observable of type IPv4-Addr is not correctly formatted
OpenCTI checks the validity of the observable value.
"},{"location":"deployment/troubleshooting/#dependencies-errors","title":"Dependencies errors","text":"TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark...
Disk full, no space left on the device for ElasticSearch.
"},{"location":"deployment/upgrade/","title":"Upgrade","text":"Depending on your installation mode, upgrade path may change.
Migrations
The platform takes care of all necessary underlying database migrations, if any; you can upgrade OpenCTI from any version to the latest one, including skipping multiple major releases.
"},{"location":"deployment/upgrade/#using-docker","title":"Using Docker","text":"Before applying this procedure, please update your docker-compose.yml
file with the new version number of container images.
$ sudo docker-compose stop\n$ sudo docker-compose pull\n$ sudo docker-compose up -d\n
"},{"location":"deployment/upgrade/#for-docker-swarm","title":"For Docker swarm","text":"For each of services, you have to run the following command:
$ sudo docker service update --force service_name\n
"},{"location":"deployment/upgrade/#manual-installation","title":"Manual installation","text":"When upgrading the platform, you have to replace all files and restart the platform, the database migrations will be done automatically:
$ yarn serv\n
"},{"location":"development/api-usage/","title":"GraphQL API and playground","text":"Under construction
We are doing our best to complete this page. If you want to participate, don't hesitate to join the Filigran Community on Slack or submit your pull request on the Github doc repository.
"},{"location":"development/connectors/","title":"Connector development","text":""},{"location":"development/connectors/#introduction","title":"Introduction","text":"A connector in OpenCTI is a service that runs next to the platform and can be implemented in almost any programming language that has STIX2 support. Connectors are used to extend the functionality of OpenCTI and allow operators to shift some of the processing workload to external services. To use the conveniently provided OpenCTI connector SDK you need to use Python3 at the moment.
We chose to have a very decentralized approach to connectors, in order to give maximum freedom to developers and vendors. So a connector in OpenCTI can be defined as a standalone Python 3 process that pushes an understandable format of data to an ingestion queue of messages.
Each connector must implement a long-running process that can be launched just by executing the main Python file. The only mandatory dependency is the OpenCTIConnectorHelper
class that enables the connector to send data to OpenCTI.
First, think about your use case to choose an appropriate connector type - what do you want to achieve with your connector? The following table gives you an overview of the current connector types and some typical use cases:
Connector types
Type Typical use cases Example connector EXTERNAL_IMPORT Integrate external TI provider, Integrate external TI platform AlienVault INTERNAL_ENRICHMENT Enhance existing data with additional knowledge AbuseIP INTERNAL_IMPORT_FILE (Bulk) import knowledge from files Import document INTERNAL_EXPORT_FILE (Bulk) export knowledge to files STIX 2.1, CSV. STREAM Integrate external TI provider, Integrate external TI platform Elastic SecurityAfter you've selected your connector type make yourself familiar with STIX2 and the supported relationships in OpenCTI. Having some knowledge about the internal data models with help you a lot with the implementation of your idea.
"},{"location":"development/connectors/#preparation","title":"Preparation","text":""},{"location":"development/connectors/#environment-setup","title":"Environment Setup","text":"To develop and test your connector, you need a running OpenCTI instance with the frontend and the messaging broker accessible. If you don't plan on developing anything for the OpenCTI platform or the frontend, the easiest setup for the connector development is using the docker setup, For more details see here.
"},{"location":"development/connectors/#coding-setup","title":"Coding Setup","text":"To give you an easy starting point we prepared an example connector in the public repository you can use as template to bootstrap your development.
Some prerequisites we recommend to follow this tutorial:
In the terminal check out the connectors repository and copy the template connector to $myconnector
(replace it with your name throughout the following text examples).
$ pip3 install black flake8 pycti\n# Fork the current repository, then clone your fork\n$ git clone https://github.com/YOUR-USERNAME/connectors.git\n$ cd connectors\n$ git remote add upstream https://github.com/OpenCTI-Platform/connectors.git\n# Create a branch for your feature/fix\n$ git checkout -b [branch-name]\n$ cp -r template $connector_type/$myconnector\n$ cd $connector_type/$myconnector\n$ tree .\n.\n\u251c\u2500\u2500 docker-compose.yml\n\u251c\u2500\u2500 Dockerfile\n\u251c\u2500\u2500 entrypoint.sh\n\u251c\u2500\u2500 README.md\n\u2514\u2500\u2500 src\n \u251c\u2500\u2500 config.yml.sample\n \u251c\u2500\u2500 main.py\n \u2514\u2500\u2500 requirements.txt\n\n1 directory, 7 files\n
"},{"location":"development/connectors/#changing-the-template","title":"Changing the template","text":"There are a few files in the template we need to change for our connector to be unique. You can check for all places you need to change you connector name with the following command (the output will look similar):
$ grep -Ri template .\n\nREADME.md:# OpenCTI Template Connector\nREADME.md:| `connector_type` | `CONNECTOR_TYPE` | Yes | Must be `Template_Type` (this is the connector type). |\nREADME.md:| `connector_name` | `CONNECTOR_NAME` | Yes | Option `Template` |\nREADME.md:| `connector_scope` | `CONNECTOR_SCOPE` | Yes | Supported scope: Template Scope (MIME Type or Stix Object) |\nREADME.md:| `template_attribute` | `TEMPLATE_ATTRIBUTE` | Yes | Additional setting for the connector itself |\ndocker-compose.yml: connector-template:\ndocker-compose.yml: image: opencti/connector-template:4.5.5\ndocker-compose.yml: - CONNECTOR_TYPE=Template_Type\ndocker-compose.yml: - CONNECTOR_NAME=Template\ndocker-compose.yml: - CONNECTOR_SCOPE=Template_Scope # MIME type or Stix Object\nentrypoint.sh:cd /opt/opencti-connector-template\nDockerfile:COPY src /opt/opencti-template\nDockerfile: cd /opt/opencti-connector-template && \\\nsrc/main.py:class Template:\nsrc/main.py: \"TEMPLATE_ATTRIBUTE\", [\"template\", \"attribute\"], config, True\nsrc/main.py: connectorTemplate = Template()\nsrc/main.py: connectorTemplate.run()\nsrc/config.yml.sample: type: 'Template_Type'\nsrc/config.yml.sample: name: 'Template'\nsrc/config.yml.sample: scope: 'Template_Scope' # MIME type or SCO\n
Required changes:
Template
or template
mentions to your connector name e.g. ImportCsv
or importcsv
TEMPLATE
mentions to your connector name e.g. IMPORTCSV
Template_Scope
mentions to the required scope of your connector. For processing imported files, that can be the Mime type e.g. application/pdf
or for enriching existing information in OpenCTI, define the STIX object's name e.g. Report
. Multiple scopes can be separated by a simple ,
Template_Type
to the connector type you wish to develop. The OpenCTI types (OpenCTI flags) are defined in this table.After getting the configuration parameters of your connector, you have to initialize the OpenCTI connector helper by using the pycti
Python library. This is shown in the following example:
class TemplateConnector:\ndef __init__(self):\n# Instantiate the connector helper from config\nconfig_file_path = os.path.dirname(os.path.abspath(__file__)) + \"/config.yml\"\nconfig = (\nyaml.load(open(config_file_path), Loader=yaml.SafeLoader)\nif os.path.isfile(config_file_path)\nelse {}\n)\nself.helper = OpenCTIConnectorHelper(config)\nself.custom_attribute = get_config_variable(\n\"TEMPLATE_ATTRIBUTE\", [\"template\", \"attribute\"], config\n)\n
Since there are some basic differences in the tasks of the different connector classes, the structure is also a bit class dependent. While the external-import and the stream connector run independently in a regular interval or constantly, the other 3 connector classes only run when being requested by the OpenCTI platform.
The self-triggered connectors run independently, but the OpenCTI need to define a callback function, which can be executed for the connector to start its work. This is done via self.helper.listen(self._process_message)
. In the appended examples, the difference of the setup can be seen.
Self-triggered Connectors
OpenCTI triggered
from pycti import OpenCTIConnectorHelper, get_config_variable\nclass TemplateConnector:\ndef __init__(self) -> None:\n# Initialization procedures\n[...]\nself.template_interval = get_config_variable(\n\"TEMPLATE_INTERVAL\", [\"template\", \"interval\"], config, True\n)\ndef get_interval(self) -> int:\nreturn int(self.template_interval) * 60 * 60 * 24\ndef run(self) -> None:\n# Main procedure\nif __name__ == \"__main__\":\ntry:\ntemplate_connector = TemplateConnector()\ntemplate_connector.run()\nexcept Exception as e:\nprint(e)\ntime.sleep(10)\nexit(0)\n
from pycti import OpenCTIConnectorHelper, get_config_variable\n\nclass TemplateConnector:\n    def __init__(self) -> None:\n        # Initialization procedures\n        [...]\n        self.template_interval = get_config_variable(\n            \"TEMPLATE_INTERVAL\", [\"template\", \"interval\"], config, True\n        )\n\n    def get_interval(self) -> int:\n        return int(self.template_interval) * 60 * 60 * 24\n\n    def run(self) -> None:\n        # Main procedure\n        [...]\n\nif __name__ == \"__main__\":\n    try:\n        template_connector = TemplateConnector()\n        template_connector.run()\n    except Exception as e:\n        print(e)\n        time.sleep(10)\n        exit(0)\n
"},{"location":"development/connectors/#write-and-read-operations","title":"Write and Read Operations","text":"When using the OpenCTIConnectorHelper
class, there are two way for reading from or writing data to the OpenCTI platform.
self.helper.api
self.send_stix2_bundle
The recommended way for creating or updating data in the OpenCTI platform is via the OpenCTI worker. This enables the connector to just send and forget about thousands of entities at once to without having to think about the ingestion order, performance or error handling.
\u26a0\ufe0f **Please DO NOT use the api interface to create new objects in connectors.**The OpenCTI connector helper method send_stix2_bundle
must be used to send data to OpenCTI. The send_stix2_bundle
function takes 2 arguments.
string
(mandatory)list
of entities types that should be ingested (optional)Here is an example using the STIX2 Python library:
from stix2 import Bundle, AttackPattern\n[...]\nattack_pattern = AttackPattern(name='Evil Pattern')\nbundle_objects = []\nbundle_objects.append(attack_pattern)\nbundle = Bundle(objects=bundle_objects).serialize()\nbundles_sent = self.opencti_connector_helper.send_stix2_bundle(bundle)\n
"},{"location":"development/connectors/#reading-from-the-opencti-platform","title":"Reading from the OpenCTI platform","text":"Read queries to the OpenCTI platform can be achieved using the API and the STIX IDs can be attached to reports to create the relationship between those two entities.
entity = self.helper.api.vulnerability.read(\nfilters={\"key\": \"name\", \"values\": [\"T1234\"]}\n)\n
If you want to add the found entity via objects_refs
to another SDO, simple add a list of stix_ids
to the SDO. Here's an example using the entity from the code snippet above:
from stix2 import Report\n[...]\nreport = Report(\nid=report[\"standard_id\"],\nobject_refs=[entity[\"standard_id\"]],\n)\n
"},{"location":"development/connectors/#logging","title":"Logging","text":"When something crashes at a user's, you as a developer want to know as much as possible about this incident to easily improve your code and remove this issue. To do so, it is very helpful if your connector documents what it does. Use info
messages for big changes like the beginning or the finishing of an operation, but to facilitate your bug removal attempts, implement debug
messages for minor operation changes to document different steps in your code.
When encountering a crash, the connector's user can easily restart the troubling connector with the debug logging activated.
CONNECTOR_LOG_LEVEL=debug
Using those additional log messages, the bug report is more enriched with information about the possible cause of the problem. Here's an example of how the logging should be implemented:
def run(self) -> None:\n    self.helper.log_info(\"Template connector starts\")\n    results = self._ask_for_news()\n    [...]\n\ndef _ask_for_news(self) -> list:\n    overall = []\n    for i in range(0, 10):\n        self.helper.log_debug(f\"Asking about news with count '{i}'\")\n        # Do something\n        self.helper.log_debug(f\"Result: '{result}'\")\n        overall.append(result)\n    return overall\n
Please make sure that the debug messages are rich in useful information, but that they are not redundant and that the user is not drowned in unnecessary information.
"},{"location":"development/connectors/#additional-implementations","title":"Additional implementations","text":"If you are still unsure about how to implement certain things in your connector, we advise you to have a look at the code of other connectors of the same type. Maybe they are already using approach which is suitable for addressing to your problem.
"},{"location":"development/connectors/#opencti-triggered-connector-special-cases","title":"OpenCTI triggered Connector - Special cases","text":""},{"location":"development/connectors/#data-layout-of-dictionary-from-callback-function","title":"Data Layout of Dictionary from Callback function","text":"OpenCTI sends the connector a few instructions via the data
dictionary in the callback function. Depending on the connector type, the data dictionary content is a bit different. Here are a few examples for each connector type.
Internal Import Connector
Internal Enrichment Connector
{ \"file_id\": \"<fileId>\",\n\"file_mime\": \"application/pdf\", \"file_fetch\": \"storage/get/<file_id>\", // Path to get the file\n\"entity_id\": \"report--82843863-6301-59da-b783-fe98249b464e\", // Context of the upload\n}\n
{ \"entity_id\": \"<stixCoreObjectId>\" // StixID of the object wanting to be enriched\n}\n
Internal Export Connector
{ \"export_scope\": \"single\", // 'single' or 'list'\n\"export_type\": \"simple\", // 'simple' or 'full'\n\"file_name\": \"<fileName>\", // Export expected file name\n\"max_marking\": \"<maxMarkingId>\", // Max marking id\n\"entity_type\": \"AttackPattern\", // Exported entity type\n// ONLY for single entity export\n\"entity_id\": \"<entity.id>\", // Exported element\n// ONLY for list entity export\n\"list_params\": \"[<parameters>]\" // Parameters for finding entities\n}\n
"},{"location":"development/connectors/#self-triggered-connector-special-cases","title":"Self triggered Connector - Special cases","text":""},{"location":"development/connectors/#initiating-a-work-before-pushing-data","title":"Initiating a 'Work' before pushing data","text":"For self-triggered connectors, OpenCTI has to be told about new jobs to process and to import. This is done by registering a so called work
before sending the stix bundle and signalling the end of a work. Here an example:
By implementing the work registration, the runs will show up as in this screenshot for the MITRE ATT&CK connector:
def run() -> None:\n# Anounce upcoming work\ntimestamp = int(time.time())\nnow = datetime.utcfromtimestamp(timestamp)\nfriendly_name = \"Template run @ \" + now.strftime(\"%Y-%m-%d %H:%M:%S\")\nwork_id = self.helper.api.work.initiate_work(\nself.helper.connect_id, friendly_name\n)\n[...]\n# Send Stix bundle\nself.helper.send_stix2_bundle(\nbundle,\nentities_types=self.helper.connect_scope,\nupdate=True,\nwork_id=work_id,\n)\n# Finish the work\nself.helper.log_info(\nf\"Connector successfully run, storing last_run as {str(timestamp)}\"\n) \nmessage = \"Last_run stored, next run in: {str(round(self.get_interval() / 60 / 60 / 24, 2))} days\"\nself.helper.api.work.to_processed(work_id, message)\n
"},{"location":"development/connectors/#interval-handling","title":"Interval handling","text":"The connector is also responsible for making sure that it runs in certain intervals. In most cases, the intervals are definable in the connector config and then only need to be set and updated during the runtime.
class TemplateConnector:\ndef __init__(self) -> None:\n# Initialization procedures\n[...]\nself.template_interval = get_config_variable(\n\"TEMPLATE_INTERVAL\", [\"template\", \"interval\"], config, True\n)\ndef get_interval(self) -> int:\nreturn int(self.template_interval) * 60 * 60 * 24\ndef run(self) -> None:\nself.helper.log_info(\"Fetching knowledge...\")\nwhile True:\ntry:\n# Get the current timestamp and check\ntimestamp = int(time.time())\ncurrent_state = self.helper.get_state()\nif current_state is not None and \"last_run\" in current_state:\nlast_run = current_state[\"last_run\"]\nself.helper.log_info(\n\"Connector last run: \"\n+ datetime.utcfromtimestamp(last_run).strftime(\n\"%Y-%m-%d %H:%M:%S\"\n)\n)\nelse:\nlast_run = None\nself.helper.log_info(\"Connector has never run\")\n# If the last_run is more than interval-1 day\nif last_run is None or (\n(timestamp - last_run)\n> ((int(self.template_interval) - 1) * 60 * 60 * 24)\n):\ntimestamp = int(time.time())\nnow = datetime.utcfromtimestamp(timestamp)\nfriendly_name = \"Connector run @ \" + now.strftime(\"%Y-%m-%d %H:%M:%S\")\n###\n# RUN CODE HERE \n###\n# Store the current timestamp as a last run\nself.helper.log_info(\n\"Connector successfully run, storing last_run as \"\n+ str(timestamp)\n)\nself.helper.set_state({\"last_run\": timestamp})\nmessage = (\n\"Last_run stored, next run in: \"\n+ str(round(self.get_interval() / 60 / 60 / 24, 2))\n+ \" days\"\n)\nself.helper.api.work.to_processed(work_id, message)\nself.helper.log_info(message)\ntime.sleep(60)\nelse:\nnew_interval = self.get_interval() - (timestamp - last_run)\nself.helper.log_info(\n\"Connector will not run, next run in: \"\n+ str(round(new_interval / 60 / 60 / 24, 2))\n+ \" days\"\n)\ntime.sleep(60)\n
"},{"location":"development/connectors/#running-the-connector","title":"Running the connector","text":"For development purposes, it is easier to simply run the python script locally until everything works as it should.
$ virtualenv env\n$ source ./env/bin/activate\n$ pip3 install -r requirements.txt\n$ cp config.yml.sample config.yml\n# Define the opencti url and token, as well as the connector's id\n$ vim config.yml\n$ python3 main.py\nINFO:root:Listing Threat-Actors with filters null.\nINFO:root:Connector registered with ID: a2de809c-fbb9-491d-90c0-96c7d1766000\nINFO:root:Starting ping alive thread\n...\n
"},{"location":"development/connectors/#final-testing","title":"Final Testing","text":"Before submitting a Pull Request, please test your code for different use cases and scenarios. We don't have an automatic testing suite for the connectors yet, thus we highly depend on developers thinking about creative scenarios their code could encounter.
"},{"location":"development/connectors/#prepare-for-release","title":"Prepare for release","text":"If you plan to provide your connector to be used by the community (\u2764\ufe0f) your code should pass the following (minimum) criteria.
# Linting with flake8 contains no errors or warnings\n$ flake8 --ignore=E,W\n# Verify formatting with black\n$ black .\nAll done! \u2728 \ud83c\udf70 \u2728\n1 file left unchanged.\n# Verify import sorting\n$ isort --profile black .\nFixing /path/to/connector/file.py\n# Push your feature/fix on Github\n$ git add [file(s)]\n$ git commit -m \"[connector_name] descriptive message\"\n$ git push origin [branch-name]\n# Open a pull request with the title \"[connector_name] message\"\n
If you have any trouble with this just reach out to the OpenCTI core team. We are happy to assist with this.
"},{"location":"development/environment_ubuntu/","title":"Prerequisites Ubuntu","text":"The development stack requires some base software that needs to be installed.
"},{"location":"development/environment_ubuntu/#docker-or-podman","title":"Docker or podman","text":"Platform dependencies in development are deployed through container management, so you need to install a container stack.
We currently support docker and podman.
$ sudo apt-get install docker docker-compose curl\n
As OpenCTI has a dependency on ElasticSearch, you have to set the vm.max_map_count before running the containers, as mentioned in the ElasticSearch documentation.
$ sudo sysctl -w vm.max_map_count=262144\n
"},{"location":"development/environment_ubuntu/#nodejs-and-yarn","title":"NodeJS and yarn","text":"The platform is developed on nodejs technology, so you need to install node and the yarn package manager.
$ sudo apt-get install nodejs\n$ sudo curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -\n$ sudo echo \"deb https://dl.yarnpkg.com/debian/ stable main\" | sudo tee /etc/apt/sources.list.d/yarn.list\n$ sudo apt-get update && sudo apt-get install yarn\n
"},{"location":"development/environment_ubuntu/#python-runtime","title":"Python runtime","text":"For worker and connectors, a python runtime is needed.
$ sudo apt-get install python3 python3-pip\n
"},{"location":"development/environment_ubuntu/#git-and-dev-tool","title":"Git and dev tool","text":"$ sudo apt-get install git-all\n
The development stack requires some base software that needs to be installed.
"},{"location":"development/environment_windows/#docker-or-podman","title":"Docker or podman","text":"Platform dependencies in development are deployed through container management, so you need to install a container stack.
We currently support docker and podman.
Docker Desktop from - https://docs.docker.com/desktop/install/windows-install/
wsl --set-default-version 2
The platform is developed on nodejs technology, so you need to install node and the yarn package manager.
Shell out to CMD prompt as Administrator and install/run:
pip3 install pywin32
Configure Yarn (https://yarnpkg.com/getting-started/install)
corepack enable
For worker and connectors, a python runtime is needed. Even if you already have a python runtime installed through node installation, on windows some nodejs package will be recompiled with python and C++ runtime.
For this reason Visual Studio Build Tools is required.
Just use defaults on each screen
Install your preferred IDE
This page documents how to set up an \"All-in-One\" development environment for OpenCTI, necessary for developing on the OpenCTI platform, a client library or the connectors. The devenv will contain data of 3 different repositories:
Contains the platform OpenCTI project code base:
~/opencti/opencti-platform/opencti-dev
~/opencti/opencti-platform/opencti-graphql
~/opencti/opencti-platform/opencti-frontend
~/opencti/opencti-worker
Contains a lot of developed connectors, as a source of inspiration for your new connector.
"},{"location":"development/platform/#client-python","title":"Client python","text":"Contains the source code of the python library used in worker or connectors.
"},{"location":"development/platform/#prerequisites","title":"Prerequisites","text":"Some tools are needed before starting to develop. Please check Ubuntu prerequisites or Windows prerequisites
"},{"location":"development/platform/#clone-the-projects","title":"Clone the projects","text":"Fork and clone the git repositories
In development, dependencies are deployed through containers. A development compose file is available in ~/opencti/opencti-platform/opencti-dev
cd ~/docker\n#Start the stack in background\ndocker-compose -f ./docker-compose-dev.yml up -d\n
All the dependencies of OpenCTI are now running and waiting for the platform to start.
"},{"location":"development/platform/#backend-api","title":"Backend / API","text":""},{"location":"development/platform/#python-virtual-env","title":"Python virtual env","text":"The GraphQL API is developed in JS and with some python code. As it's an \"all-in-one\" installation, the python environment will be installed in a virtual environment.
cd ~/opencti/opencti-platform/opencti-graphql\npython3 -m venv .venv --prompt \"graphql\"\nsource .venv/bin/activate\npip install --upgrade pip wheel setuptools\nyarn install\nyarn install:python\ndeactivate\n
"},{"location":"development/platform/#development-configuration","title":"Development configuration","text":"The API can be specifically configured with files depending on the starting profile. By default, the default.json file is used and will be correctly configured for local usage except for the admin password
So you need to create a development profile file. You can duplicate the default file and adapt it to your needs.
cd ~/opencti/opencti-platform/opencti-graphql/config\ncp default.json development.json\n
At minimum, adapt the admin section to set the password and token.
\"admin\": {\n\"email\": \"admin@opencti.io\",\n\"password\": \"MyNewPassword\",\n\"token\": \"UUID generated with https://www.uuidgenerator.net\"\n}\n
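The token is a UUIDv4; rather than an online generator, it can also be generated locally. A minimal sketch using only the Python standard library:

```python
# Generate a random UUIDv4, suitable for the admin "token" configuration field
import uuid

token = str(uuid.uuid4())
print(token)  # random on every run, e.g. "9f1c2e7a-..."
```

Paste the printed value into the \"token\" field of your development.json file.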
"},{"location":"development/platform/#install-start","title":"Install / start","text":"Before starting the backend you need to install the nodejs modules
cd ~/opencti/opencti-platform/opencti-graphql\nyarn install\n
Then you can simply start the backend API with the yarn start command
cd ~/opencti/opencti-platform/opencti-graphql\nyarn start\n
The platform will start logging some interesting information
{\"category\":\"APP\",\"level\":\"info\",\"message\":\"[OPENCTI] Starting platform\",\"timestamp\":\"2023-07-02T16:37:10.984Z\",\"version\":\"5.8.7\"}\n{\"category\":\"APP\",\"level\":\"info\",\"message\":\"[OPENCTI] Checking dependencies statuses\",\"timestamp\":\"2023-07-02T16:37:10.987Z\",\"version\":\"5.8.7\"}\n{\"category\":\"APP\",\"level\":\"info\",\"message\":\"[SEARCH] Elasticsearch (8.5.2) client selected / runtime sorting enabled\",\"timestamp\":\"2023-07-02T16:37:11.014Z\",\"version\":\"5.8.7\"}\n{\"category\":\"APP\",\"level\":\"info\",\"message\":\"[CHECK] Search engine is alive\",\"timestamp\":\"2023-07-02T16:37:11.015Z\",\"version\":\"5.8.7\"}\n...\n{\"category\":\"APP\",\"level\":\"info\",\"message\":\"[INIT] Platform initialization done\",\"timestamp\":\"2023-07-02T16:37:11.622Z\",\"version\":\"5.8.7\"}\n{\"category\":\"APP\",\"level\":\"info\",\"message\":\"[OPENCTI] API ready on port 4000\",\"timestamp\":\"2023-07-02T16:37:12.382Z\",\"version\":\"5.8.7\"}\n
If you want to start with another profile you can use the -e parameter. For example, to use the profile.json configuration file:
yarn start -e profile\n
"},{"location":"development/platform/#code-check","title":"Code check","text":"Before pushing your code, you need to validate the syntax and ensure the tests pass.
"},{"location":"development/platform/#for-validation","title":"For validation","text":"yarn lint
yarn check-ts
To run the tests, you will need to create a test.json file. You can use the same dependencies by only adapting the prefixes of all dependencies.
yarn test:dev
Before starting the frontend you need to install the nodejs modules
cd ~/opencti/opencti-platform/opencti-front\nyarn install\n
Then you can simply start the frontend with the yarn start command
cd ~/opencti/opencti-platform/opencti-front\nyarn start\n
The frontend will start with some interesting information
[INFO] [default] compiling...\n[INFO] [default] compiled documents: 1592 reader, 1072 normalization, 1596 operation text\n[INFO] Compilation completed.\n[INFO] Done.\n[HPM] Proxy created: /stream -> http://localhost:4000\n[HPM] Proxy created: /storage -> http://localhost:4000\n[HPM] Proxy created: /taxii2 -> http://localhost:4000\n[HPM] Proxy created: /feeds -> http://localhost:4000\n[HPM] Proxy created: /graphql -> http://localhost:4000\n[HPM] Proxy created: /auth/** -> http://localhost:4000\n[HPM] Proxy created: /static/flags/** -> http://localhost:4000\n
The web UI should be accessible on http://127.0.0.1:3000
"},{"location":"development/platform/#code-check_1","title":"Code check","text":"Before pushing your code, you need to validate the syntax and ensure the tests pass.
"},{"location":"development/platform/#for-validation_1","title":"For validation","text":"yarn lint
yarn check-ts
yarn test
Running a worker is required when you want to develop on the ingestion or import/export connectors.
"},{"location":"development/platform/#python-virtual-env_1","title":"Python virtual env","text":"cd ~/opencti/opencti-worker/src\npython3 -m venv .venv --prompt \"worker\"\nsource .venv/bin/activate\npip3 install --upgrade pip wheel setuptools\npip3 install -r requirements.txt\ndeactivate\n
"},{"location":"development/platform/#install-start_2","title":"Install / start","text":"cd ~/opencti/opencti-worker/src\nsource .venv/bin/activate\npython worker.py\n
"},{"location":"development/platform/#connectors_1","title":"Connectors","text":"For connectors development, please take a look at the dedicated Connectors development page.
"},{"location":"development/platform/#production-build","title":"Production build","text":"Based on development source you can build the package for production. This package will be minified and optimized with esbuild.
$ cd opencti-frontend\n$ yarn build\n$ cd ../opencti-graphql\n$ yarn build\n
After the build, you can start the production build with yarn serv. This build will use the production.json configuration file.
$ cd ../opencti-graphql\n$ yarn serv\n
"},{"location":"development/python/","title":"Python library","text":"Under construction
We are doing our best to complete this page. If you want to participate, don't hesitate to join the Filigran Community on Slack or submit your pull request on the Github doc repository.
"},{"location":"reference/api/","title":"Knowledge graph","text":"Under construction
We are doing our best to complete this page. If you want to participate, don't hesitate to join the Filigran Community on Slack or submit your pull request on the Github doc repository.
"},{"location":"reference/csv-feeds/","title":"CSV feeds","text":"Under construction
We are doing our best to complete this page. If you want to participate, don't hesitate to join the Filigran Community on Slack or submit your pull request on the Github doc repository.
"},{"location":"reference/data-intelligence/","title":"Data intelligence","text":"Under construction
We are doing our best to complete this page. If you want to participate, don't hesitate to join the Filigran Community on Slack or submit your pull request on the Github doc repository.
"},{"location":"reference/data-model/","title":"Data model","text":"Under construction
We are doing our best to complete this page. If you want to participate, don't hesitate to join the Filigran Community on Slack or submit your pull request on the Github doc repository.
"},{"location":"reference/graph/","title":"Knowledge graph","text":"Under construction
We are doing our best to complete this page. If you want to participate, don't hesitate to join the Filigran Community on Slack or submit your pull request on the Github doc repository.
"},{"location":"reference/security/","title":"Security","text":"Under construction
We are doing our best to complete this page. If you want to participate, don't hesitate to join the Filigran Community on Slack or submit your pull request on the Github doc repository.
"},{"location":"reference/streaming/","title":"Data Streaming","text":""},{"location":"reference/streaming/#presentation","title":"Presentation","text":"In order to provide a real time way to consume STIX CTI information, OpenCTI provides data events in a stream that can be consumed to react on creation, update, deletion and merge. This way of getting information out of OpenCTI is highly efficient and already used by some connectors.
"},{"location":"reference/streaming/#technology","title":"Technology","text":""},{"location":"reference/streaming/#redis-stream","title":"Redis stream","text":"OpenCTI is currently using REDIS Stream (See https://redis.io/topics/streams-intro) as the technical layer. Each time something is modified in the OpenCTI database, a specific event is added in the stream.
"},{"location":"reference/streaming/#sse-protocol","title":"SSE protocol","text":"In order to provide an easy-to-consume protocol, we decided to provide an SSE (https://fr.wikipedia.org/wiki/Server-sent_events) HTTP URL linked to the standard login system of OpenCTI. Any user with the correct access rights can open http://opencti_instance/stream and open an SSE connection to start receiving live events. You can of course consume the stream directly in Redis, but you will then have to manage access and rights yourself.
"},{"location":"reference/streaming/#events-format","title":"Events format","text":"id: {Event stream id} -> Like 1620249512318-0\nevent: {Event type} -> create / update / delete\ndata: { -> The complete event data\n version -> The version number of the event\n type -> The inner type of the event\n scope -> The scope of the event [internal or external]\n data: {STIX data} -> The STIX representation of the data.\n message -> A simple string to easy understand the event\n origin: {Data Origin} -> Complex object with different information about the origin of the event\n context: {Event context} -> Complex object with meta information depending of the event type\n}\n
The id can be used to consume the stream from this specific point.
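As a sketch of how a consumer might handle these events, the helper below parses one SSE event block of the shape shown above, using only the Python standard library (the sample payload is illustrative, not a real platform event):

```python
import json

def parse_sse_event(raw: str) -> dict:
    """Parse one SSE event block with 'id: ', 'event: ' and 'data: ' lines."""
    event = {}
    data_lines = []
    for line in raw.splitlines():
        if line.startswith("id: "):
            event["id"] = line[4:]
        elif line.startswith("event: "):
            event["event"] = line[7:]
        elif line.startswith("data: "):
            data_lines.append(line[6:])
    if data_lines:
        # Multi-line data fields are joined before JSON decoding
        event["data"] = json.loads("\n".join(data_lines))
    return event

# Illustrative event, following the format described above
raw = (
    "id: 1620249512318-0\n"
    "event: create\n"
    'data: {"version": 4, "type": "create", "scope": "external"}'
)
evt = parse_sse_event(raw)
print(evt["id"], evt["event"], evt["data"]["scope"])
```

In a real consumer you would open http://opencti_instance/stream with your platform credentials, optionally send a last-event-id header to resume from a known point, and feed each blank-line-separated block to such a parser.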
"},{"location":"reference/streaming/#stix-data","title":"STIX data","text":"The current STIX data representation is based on the STIX 2.1 format using the extension mechanism. Please take a look at https://docs.oasis-open.org/cti/stix/v2.1/stix-v2.1.html for more information.
"},{"location":"reference/streaming/#create","title":"Create","text":"It is simply the data created in STIX format.
"},{"location":"reference/streaming/#delete","title":"Delete","text":"It is simply the data in STIX format just before its deletion. You will also find the automated deletions in context, due to automatic dependency management.
{\n\"context\": {\n\"deletions\": [{STIX data}]\n}\n}\n
"},{"location":"reference/streaming/#update","title":"Update","text":"This event type publishes the complete STIX data along with patch information. Thanks to the patches, it is possible to rebuild the previous version and easily understand what happened in the update. patch and reverse_patch follow the official jsonpatch specification. You can find more information at https://jsonpatch.com/
{\n\"context\": {\n\"patch\": [/* patch operation object */],\n\"reverse_patch\": [/* patch operation object */]\n}\n}\n
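As an illustration of the reverse_patch mechanism, the sketch below applies a deliberately simplified JSON Patch (replace operations only) to rebuild the previous version of an object. The sample data is hypothetical; a real consumer should use a full JSON Patch implementation such as the python jsonpatch package:

```python
import copy

def apply_replace_patch(doc: dict, patch: list) -> dict:
    """Apply a simplified JSON Patch supporting only 'replace' operations."""
    result = copy.deepcopy(doc)  # keep the input document untouched
    for op in patch:
        if op["op"] != "replace":
            raise NotImplementedError(f"unsupported op: {op['op']}")
        # "/description" -> ["description"]; walk to the parent of the target
        parts = [p for p in op["path"].split("/") if p]
        target = result
        for key in parts[:-1]:
            target = target[key]
        target[parts[-1]] = op["value"]
    return result

# Rebuild the previous version of an updated object from its reverse_patch
current = {"name": "Black Vine", "description": "Updated description"}
reverse_patch = [
    {"op": "replace", "path": "/description", "value": "Original description"}
]
previous = apply_replace_patch(current, reverse_patch)
print(previous["description"])  # -> Original description
```

Applying the patch attribute to the previous version would, symmetrically, yield the current one.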
"},{"location":"reference/streaming/#merge","title":"Merge","text":"Merge is a mix of an update of the merge targets and deletions of the sources. In this event you will find the same patch and reverse_patch as an update and the list of elements merged into the target in the \"sources\" attribute.
{\n\"context\": {\n\"patch\": [/* patch operation object */],\n\"reverse_patch\": [/* patch operation object */],\n\"sources\": [{STIX data}]\n}\n}\n
"},{"location":"reference/streaming/#stream-types","title":"Stream types","text":"In OpenCTI we propose 2 types of streams.
"},{"location":"reference/streaming/#base-stream","title":"Base stream","text":"The stream hosted at the /stream url contains all the raw events of the platform, always filtered by the user rights (marking based). It's a technical stream, a bit complex to use but very useful for internal processing or some specific connectors like backup/restore. This stream is live by default, but if you want to catch up you can simply add the from parameter to your query. This parameter accepts a timestamp in milliseconds and also an event id. Like http://localhost/stream?from=1620249512599
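For example, a catch-up URL with the from parameter can be built like this (a sketch with the Python standard library; the host and timestamp are the ones from the example above):

```python
from urllib.parse import urlencode

base = "http://localhost/stream"
# "from" accepts a millisecond timestamp or an event id like "1620249512318-0"
params = {"from": 1620249512599}
url = f"{base}?{urlencode(params)}"
print(url)  # -> http://localhost/stream?from=1620249512599
```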
Stream size?
The raw stream is really important in the platform and needs to be sized according to the retention period you want to ensure. The more retention you have, the more safely you can reprocess past information. We usually recommend 1 month of retention, which usually corresponds to about 2,000,000 events. This limit can be configured with the redis:trimming option, please check the deployment configuration page.
"},{"location":"reference/streaming/#live-stream","title":"Live stream","text":"This stream aims to simplify your usage of the stream through connectors, providing a way to create streams with specific filters through the UI. After creating such a stream, it is simply accessible from /stream/{STREAM_ID}.
It's very useful for various data externalization and synchronization use cases, with tools like SPLUNK, TANIUM...
This stream provides different interesting mechanics:
If you want to dig into the internal behavior you can check this complete diagram:
"},{"location":"reference/streaming/#general-options","title":"General options","text":"From and recover are 2 different options that need to be explained.
from (query parameter) is always the parameter that describes the initial date/event_id you want to start from. It can also be set with the request header from or last-event-id
recover (query parameter) is an option that lets you consume the initial events from the database and not from the stream. It can also be set with the request header recover or recover-date
This difference will be transparent for the consumer but is very important to get old information as an initial snapshot. This also lets you consume information that is no longer in the stream retention period.
The next diagram will help you to understand the concept:
"},{"location":"reference/taxii-feeds/","title":"Taxii feeds","text":"Under construction
We are doing our best to complete this page. If you want to participate, don't hesitate to join the Filigran Community on Slack or submit your pull request on the Github doc repository.
"},{"location":"reference/taxonomy/","title":"Taxonomy","text":"Under construction
We are doing our best to complete this page. If you want to participate, don't hesitate to join the Filigran Community on Slack or submit your pull request on the Github doc repository.
"},{"location":"usage/automation/","title":"Playbooks Automation","text":"Enterprise edition
Playbooks automation is available under the \"Filigran Enterprise Edition\" license.
Please read the dedicated page to have all information
OpenCTI playbooks are flexible automation scenarios which can be fully customized and enabled by platform administrators to enrich, filter and modify the data created or updated in the platform.
Playbook automation is accessible in the user interface under Data/Processing/Playbooks.
You need the \"Manage credentials\" capability to use the Playbooks automation, because you will be able to manipulate data that simple users cannot access.
You will then be able to:
Consider Playbook as STIX 2.1 bundle pipeline.
Starting with a component listening to a data stream, each subsequent component in the playbook processes a received STIX bundle. These components have the ability to modify the bundle and subsequently transmit the altered result to connected components.
In this paradigm, components can send out the STIX 2.1 bundle to multiple components, enabling the development of multiple branches within your playbook.
A well-designed playbook ends with a component executing an action based on the processed information. For instance, this may involve writing the STIX 2.1 bundle in a data stream.
Validate ingestion
The STIX bundle processed by the playbook won't be written in the platform without specifying it using the appropriate component, i.e. \"Send for ingestion\".
"},{"location":"usage/automation/#create-a-playbook","title":"Create a Playbook","text":"It is possible to create as many playbooks as needed, which run independently. You can give a name and description to each playbook.
The first step to define in the playbook is the \u201ctriggering event\u201d, which can be any knowledge event (create, update or delete) with customizable filters. To do so, click on the grey rectangle in the center of the workspace and choose the component to \"listen knowledge events\". Configure it with adequate filters. You can use the same filters as in other parts of the platform.
Then you have flexible choices for the next steps to:
Do not forget to start your Playbook when ready, with the Start option of the burger button placed near the name of your Playbook.
By clicking the burger button of a component, you can replace it by another one.
By clicking on the arrow icon in the bottom right corner of a component, you can develop a new branch at the same level.
By clicking the \"+\" button on a link between components, you can insert a component between the two.
"},{"location":"usage/automation/#components-of-playbooks","title":"Components of playbooks","text":""},{"location":"usage/automation/#log-data-in-standard-output","title":"Log data in standard output","text":"Will write the received STIX 2.1 bundle in platform logs with configurable log level and then send out the STIX 2.1 bundle unmodified.
"},{"location":"usage/automation/#send-for-ingestion","title":"Send for ingestion","text":"Will pass the STIX 2.1 bundle to be written in the data stream. This component has no output and should end a branch of your playbook.
"},{"location":"usage/automation/#filter-knowledge","title":"Filter Knowledge","text":"Will allow you to define a filter and apply it to the received STIX 2.1 bundle. The component has 2 outputs, one for data matching the filter and one for the remainder. By default, filtering is applied to entities having triggered the playbook. You can toggle the corresponding option to apply it to all elements in the bundle (elements that might result from enrichment for example).
"},{"location":"usage/automation/#enrich-through-connector","title":"Enrich through connector","text":"Will send the received STIX 2.1 bundle to a compatible enrichment connector and send out the modified bundle.
"},{"location":"usage/automation/#manipulate-knwoledge","title":"Manipulate Knowledge","text":"Will add, replace or remove compatible attributes of the entities contained in the received STIX 2.1 bundle and send out the modified bundle. By default, modification is applied to entities having triggered the playbook. You can toggle the corresponding option to apply it to all elements in the bundle (elements that might result from enrichment for example).
"},{"location":"usage/automation/#container-wrapper","title":"Container wrapper","text":"Will modify the received STIX 2.1 bundle to include the entities into a container of the type you configured. By default, wrapping is applied to entities having triggered the playbook. You can toggle the corresponding option to apply it to all elements in the bundle (elements that might result from enrichment for example).
"},{"location":"usage/automation/#manage-sharing-with-organizations","title":"Manage sharing with organizations","text":"Will share every entity in the received STIX 2.1 bundle with the Organizations you configured. Your platform needs to have declared a platform main organization in Settings/Parameters.
"},{"location":"usage/automation/#apply-predefined-rule","title":"Apply predefined rule","text":"Will apply a complex automation built-in rule. This kind of rule might impact performance. Current rules are: * First/Last seen computing extension from report publication date: will populate first seen and last seen date of entities contained in the report based on its publication date. * Resolve indicators based on observables (add in bundle) * Resolve observables an indicator is based on (add in bundle) * Resolve container references (add in bundle)
"},{"location":"usage/automation/#send-to-notifier","title":"Send to notifier","text":"Will generate a Notification each time a STIX 2.1 bundle is received.
"},{"location":"usage/automation/#promote-observable-to-indicator","title":"Promote observable to indicator","text":"Will generate indicators based on observables contained in the received STIX 2.1 bundle. By default, it is applied to entities having triggered the playbook. You can toggle the corresponding option to apply it to all observables in the bundle (observables that might result from enrichment for example).
"},{"location":"usage/automation/#extract-observables-from-indicator","title":"Extract observables from indicator","text":"Will extract observables based on indicators contained in the received STIX 2.1 bundle. By default, it is applied to entities having triggered the playbook. You can toggle the corresponding option to apply it to all indicators in the bundle (indicators that might result from enrichment for example).
"},{"location":"usage/automation/#reduce-knowledge","title":"Reduce Knowledge","text":"Will prune the received STIX 2.1 bundle based on the configured filter.
"},{"location":"usage/automation/#monitor-playbook-activity","title":"Monitor playbook activity","text":"At the top right of the interface, you can access the execution traces of your playbook and consult the raw data after every step of the playbook execution.
"},{"location":"usage/background-tasks/","title":"Background tasks","text":"Three types of tasks are done in the background:
Rule tasks can be seen and activated in Settings > Customization > Rules engine. Knowledge and user tasks can be seen and managed in Data > Background Tasks. The scope of each task is indicated.
"},{"location":"usage/background-tasks/#rule-tasks","title":"Rule tasks","text":"If a rule task is enabled, it leads to the scan of the whole platform data and the creation of entities or relationships in case a configuration corresponds to the task's rules. The created data are called 'inferred data'. Each time an event occurs in the platform, the rule engine checks if inferred data should be updated/created/deleted.
"},{"location":"usage/background-tasks/#knowledge-tasks","title":"Knowledge tasks","text":"Knowledge tasks are background tasks updating or deleting entities and correspond to mass operations on these data. To create one, select entities via the checkboxes in an entity list, and choose the action to perform via the toolbar.
"},{"location":"usage/background-tasks/#rights","title":"Rights","text":"User tasks are background tasks updating or deleting notifications. It can be done from the Notification section, by selecting several notifications via the checkboxes, and choosing an action via the toolbar.
"},{"location":"usage/background-tasks/#rights_1","title":"Rights","text":"Compiling CTI data in one place, deduplicating and correlating it to transform it into Intelligence is very important. But ultimately, you need to act based on this Intelligence. Some situations will need to be taken care of, like cybersecurity incidents, requests for information or requests for takedown. Some actions will then need to be traced, coordinated and overseen. Some actions will include feedback and content delivery.
OpenCTI includes Cases to allow organizations to manage situations and organize their team's work. Better, by doing Case management in OpenCTI, you handle your cases with all the context and Intelligence you need, at hand.
"},{"location":"usage/case-management/#how-to-manage-your-case-in-opencti","title":"How to manage your Case in OpenCTI?","text":"Multiple situations can be modeled in OpenCTI as a Case, either an Incident Response, a Request for Takedown or a Request for Information.
All Cases can contain any entities and relationships you need to represent the Intelligence context related to the situation. At the beginning of your case, you may find yourself with only some Observables sighted in a system. At the end, you may have Indicators, Threat Actors, impacted systems, attack patterns. All representing your findings, ready to be presented and exported as a graph, PDF report, timeline, etc.
Some Cases may need some collaborative work and specific Tasks to be performed by people that have the right skillset. OpenCTI allows you to associate Tasks
in your Cases and assign them to users in the platform. As some types of situations may need the same tasks to be done, it is also possible to pre-define lists of tasks to be applied to your case. You can define these lists by accessing the Settings/Taxonomies/Case templates panel. Then you just need to add it from the overview of your desired Case.
Tip: A user can have a custom dashboard showing all the tasks that have been assigned to them.
As with other objects in OpenCTI, you can also leverage the Notes
to add some investigation and analysis related comments, helping you shape the content of your case with unstructured data and trace all the work that has been done.
You can also use Opinions
to collect how the Case has been handled, helping you to build Lessons Learned.
To trace the evolution of your Case and define specific resolution workflows, you can use the Status
(that can be defined in Settings/Taxonomies/Status templates).
At the end of your Case, you will certainly want to report on what has been done. OpenCTI allows you to export the content of the Case in a simple but customizable PDF (currently in refactor). But of course, your company has its own document templates, right? With OpenCTI, you will be able to include some nice graphics in it. For example, a Matrix view of the attacker's attack patterns or even a graph display of how things are connected.
Also, we are currently working on a more meaningful Timeline view that will be possible to export too.
"},{"location":"usage/case-management/#use-case-example-a-suspicious-observable-is-sighted-by-a-defense-system-is-it-important","title":"Use case example: A suspicious observable is sighted by a defense system. Is it important?","text":"Sighting
relationship between your System \"SIEM perimeter A\" and the Observable \"bad.com\". Incident
in this situation, and you have created an alert based on new Incidents that sends you an email notification
and Teams message (webhook).campaign
targeting your activity sector
. \"bad.com\" is clearly something to investigate ASAP.Incident response
case. You set the priority to High, given the context, and the severity to Low, as you don't know yet whether anyone actually interacted with \"bad.com\".Task
in your case for verifying if an actual interaction happened with \"bad.com\".In the STIX 2.1 standard, some STIX Domain Objects (SDO) can be considered as \"container of knowledge\", using the object_refs
attribute to refer multiple other objects as nested references. In object_refs
, it is possible to refer to entities and relationships.
{\n\"type\": \"report\",\n\"spec_version\": \"2.1\",\n\"id\": \"report--84e4d88f-44ea-4bcd-bbf3-b2c1c320bcb3\",\n\"created_by_ref\": \"identity--a463ffb3-1bd9-4d94-b02d-74e4f1658283\",\n\"created\": \"2015-12-21T19:59:11.000Z\",\n\"modified\": \"2015-12-21T19:59:11.000Z\",\n\"name\": \"The Black Vine Cyberespionage Group\",\n\"description\": \"A simple report with an indicator and campaign\",\n\"published\": \"2016-01-20T17:00:00.000Z\",\n\"report_types\": [\"campaign\"],\n\"object_refs\": [\n\"indicator--26ffb872-1dd9-446e-b6f5-d58527e5b5d2\",\n\"campaign--83422c77-904c-4dc1-aff5-5c38f3a2c55c\",\n\"relationship--f82356ae-fe6c-437c-9c24-6b64314ae68a\"\n]\n}\n
In the previous example, the report has nested references to 3 other objects:
\"object_refs\": [\n\"indicator--26ffb872-1dd9-446e-b6f5-d58527e5b5d2\",\n\"campaign--83422c77-904c-4dc1-aff5-5c38f3a2c55c\",\n\"relationship--f82356ae-fe6c-437c-9c24-6b64314ae68a\"\n]\n
"},{"location":"usage/containers/#implementation","title":"Implementation","text":""},{"location":"usage/containers/#types-of-container","title":"Types of container","text":"In OpenCTI, containers are displayed differently than other entities, because they contain pieces of knowledge. Here is the list of containers in the platform:
Type of entity STIX standard Description Report Native Reports are collections of threat intelligence focused on one or more topics, such as a description of a threat actor, malware, or attack technique, including context and related details. Grouping Native A Grouping object explicitly asserts that the referenced STIX Objects have a shared context, unlike a STIX Bundle (which explicitly conveys no context). Observed Data Native Observed Data conveys information about cyber security related entities such as files, systems, and networks using the STIX Cyber-observable Objects (SCOs). Note Native A Note is intended to convey informative text to provide further context and/or to provide additional analysis not contained in the STIX Objects. Opinion Native An Opinion is an assessment of the correctness of the information in a STIX Object produced by a different entity. Case Extension A case, whether an Incident Response, a Request for Information or a Request for Takedown, is used to convey an epic with a set of tasks. Task Extension A task, generally used in the context of a case, is intended to convey information about something that must be done in a limited timeframe."},{"location":"usage/containers/#containers-behaviour","title":"Containers behaviour","text":"In the platform, it is always possible to visualize the list of entities and/or observables referenced in a container (Container > Entities or Observables
) but also to add / remove entities from the container.
As containers can also contain relationships, which are generally linked to the other entities in the container, it is also possible to visualize the container as a graph (Container > Knowledge
)
On the entity or the relationship side, you can always find all containers where the object is contained using the top menu Analysis
:
In all container lists, you can also filter containers based on one or multiple contained objects:
"},{"location":"usage/dashboards/","title":"Custom dashboards","text":""},{"location":"usage/dashboards/#sharing-and-access-restriction","title":"Sharing and access restriction","text":"Organizations
, groups
, or users
who have access to a dashboard can have 3 levels of access: - admin
read, write, access management - edit
read and write - view
read-only
When a user creates a custom dashboard, it is only visible to themselves. They then have admin
access. They can then define who can access it and with what level of rights via the Manage access
button at the top right of the dashboard page.
Manage access button
They can give access to organizations, groups, users, but also to all users on the platform (everyone
).
Manage access window
It is important to note that a dashboard must have at least one user with admin
access level.
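The three access levels are strictly ordered: each level includes the permissions of the levels below it. A minimal sketch of how such an ordered model can be checked (illustrative only, not OpenCTI's implementation):

```python
# Illustrative three-level dashboard access model; not OpenCTI code.
# Levels are ordered: admin > edit > view.
LEVELS = {"view": 1, "edit": 2, "admin": 3}

def can(user_level, action):
    """Check whether a granted access level permits an action."""
    required = {"read": "view", "write": "edit", "manage_access": "admin"}[action]
    return LEVELS[user_level] >= LEVELS[required]
```

Under this model, a user granted edit can read and write but not manage access, which matches the behavior described above.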
The OpenCTI core design relies on the concept of a knowledge graph, where you have two different kinds of object:
entities
, which have some properties
or attributes
.relationships
, which are created between two entity
nodes and have some properties
or attributes
.Example
An example would be that the entity APT28
has a relationship uses
to the malware entity Drovorub
.
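The example above can be sketched as data: two entity nodes and one relationship linking them. The dictionary layout and IDs below are illustrative, not the OpenCTI schema:

```python
# Minimal sketch of the knowledge-graph model: two entity nodes and one
# relationship between them. Field names and IDs are illustrative only.
entities = {
    "intrusion-set--apt28": {"type": "intrusion-set", "name": "APT28"},
    "malware--drovorub": {"type": "malware", "name": "Drovorub"},
}

relationships = [
    {
        "type": "uses",
        "source_ref": "intrusion-set--apt28",   # APT28 ...
        "target_ref": "malware--drovorub",      # ... uses Drovorub
    }
]

def describe(rel):
    """Render a relationship as a human-readable sentence."""
    src = entities[rel["source_ref"]]["name"]
    dst = entities[rel["target_ref"]]["name"]
    return f"{src} {rel['type']} {dst}"
```

Calling `describe(relationships[0])` yields the sentence form of the graph edge, "APT28 uses Drovorub".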
To enable a unified approach in the description of threat intelligence knowledge as well as importing and exporting data, the OpenCTI data model is based on the STIX 2.1 standard. Thus, we highly recommend taking a look at the STIX Introductory Walkthrough and at the different kinds of STIX relationships to get a better understanding of how OpenCTI works.
Some more important STIX naming shortcuts are:
In some cases, the model has been extended to be able to:
amplifies
, publishes
, etc.You can find below the diagram of all types of entities and relationships available in OpenCTI.
"},{"location":"usage/data-model/#attributes-and-properties","title":"Attributes and properties","text":"To get a comprehensive list of available properties for a given type of entity or relationship, you can use the GraphQL playground schema available in your \"Profile > Playground\". Then you can click on schema. You can for instance search for the keyword IntrusionSet
:
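Beyond the playground, the same schema can be queried programmatically. The sketch below builds a GraphQL request for intrusion sets; the endpoint path (/graphql), bearer-token auth, and the exact field names are assumptions to adapt to your platform version:

```python
import json

# Sketch of querying the OpenCTI GraphQL API for intrusion sets.
# The query shape, field names, and auth scheme below are assumptions
# to check against your platform's playground schema.
QUERY = """
query IntrusionSets($search: String) {
  intrusionSets(search: $search, first: 10) {
    edges { node { id name description } }
  }
}
"""

def build_request(search_term, token="<API_TOKEN>"):
    """Build the HTTP body and headers for a GraphQL POST to /graphql."""
    payload = {"query": QUERY, "variables": {"search": search_term}}
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {token}",
    }
    return json.dumps(payload), headers

body, headers = build_request("APT28")
```

The resulting body and headers can be sent with any HTTP client; the playground is the authoritative reference for the fields actually available on your instance.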
One of the core concepts of the OpenCTI knowledge graph is the set of underlying mechanisms implemented to accurately de-duplicate and consolidate (aka. upserting
) information about entities and relationships.
When an object is created in the platform, whether manually by a user or automatically by the connectors / workers chain, the platform checks if something already exists based on some properties of the object. If the object already exists, it will return the existing object and, in some cases, update it as well.
Technically, OpenCTI generates deterministic IDs based on the properties listed below (aka \"ID Contributing Properties\") to prevent duplicates. Also, it is important to note that there is a special link between name
and aliases
that prevents entities from having overlapping aliases or an alias already used in the name of another entity.
name
OR x_opencti_alias
) AND x_opencti_location_type
Attack Pattern (name
OR alias
) AND optional x_mitre_id
Campaign name
OR alias
Channel name
OR alias
City (name
OR x_opencti_alias
) AND x_opencti_location_type
Country (name
OR x_opencti_alias
) AND x_opencti_location_type
Course Of Action (name
OR alias
) AND optional x_mitre_id
Data Component name
OR alias
Data Source name
OR alias
Event name
OR alias
Feedback Case name
AND created
(date) Grouping name
AND context
Incident name
OR alias
Incident Response Case name
OR alias
Indicator pattern
OR alias
Individual (name
OR x_opencti_alias
) and identity_class
Infrastructure name
OR alias
Intrusion Set name
OR alias
Language name
OR alias
Malware name
OR alias
Malware Analysis name
OR alias
Narrative name
OR alias
Note None Observed Data name
OR alias
Opinion None Organization (name
OR x_opencti_alias
) and identity_class
Position (name
OR x_opencti_alias
) AND x_opencti_location_type
Region name
OR alias
Report name
AND published
(date) RFI Case name
AND created
(date) RFT Case name
AND created
(date) Sector (name
OR alias
) and identity_class
Task None Threat Actor name
OR alias
Tool name
OR alias
Vulnerability name
OR alias
"},{"location":"usage/deduplication/#relationships","title":"Relationships","text":"The deduplication process of relationships is based on the following criteria:
For STIX Cyber Observables, OpenCTI also generates deterministic IDs based on the STIX specification, using the \"ID Contributing Properties\" defined for each type of observable.
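The STIX 2.1 approach can be sketched as follows: the ID is a UUIDv5 computed over the JSON serialization of the ID Contributing Properties, under the namespace fixed by the specification. The spec mandates RFC 8785 canonical JSON; plain `json.dumps` with sorted keys approximates it for simple values:

```python
import json
import uuid

# Sketch of STIX 2.1 deterministic SCO IDs: UUIDv5 over the JSON
# serialization of the ID Contributing Properties, under the namespace
# fixed by the STIX 2.1 specification. The spec requires RFC 8785
# canonical JSON; sorted-key json.dumps approximates it here.
STIX_SCO_NAMESPACE = uuid.UUID("00abedb4-aa42-466c-9c01-fed23315a9b7")

def sco_id(sco_type, contributing_properties):
    payload = json.dumps(contributing_properties, sort_keys=True,
                         separators=(",", ":"))
    return f"{sco_type}--{uuid.uuid5(STIX_SCO_NAMESPACE, payload)}"

# The same contributing properties always yield the same ID, which is
# what makes de-duplication of observables deterministic.
a = sco_id("domain-name", {"value": "bad.com"})
b = sco_id("domain-name", {"value": "bad.com"})
```

Because the ID is a pure function of the contributing properties, two connectors ingesting the same observable independently converge on a single object.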
"},{"location":"usage/deduplication/#update-behavior","title":"Update behavior","text":"In cases where an entity already exists in the platform, incoming creations can trigger updates to the existing entity's attributes.
Policy for handling entity updates
If confidence_level
of the created entity is >= (greater than or equal to) the confidence_level
of the existing entity, the attributes will be updated. Notably, the confidence_level
will also be updated to the new value.
This logic has been implemented to converge the knowledge base towards the highest confidence and quality levels for both entities and relationships.
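The policy above can be sketched as follows (illustrative only, not the actual platform code):

```python
# Illustrative sketch of the confidence-based upsert policy described
# above; not the actual OpenCTI implementation.
def upsert(existing, incoming):
    """Merge incoming attributes into existing when confidence allows."""
    if incoming.get("confidence_level", 0) >= existing.get("confidence_level", 0):
        existing.update(incoming)  # attributes and confidence_level both updated
    return existing

entity = {"name": "Drovorub", "description": "old", "confidence_level": 60}
# Lower-confidence incoming data is ignored...
upsert(entity, {"description": "low quality", "confidence_level": 20})
# ...while equal-or-higher confidence updates the record.
upsert(entity, {"description": "vetted", "confidence_level": 80})
```

After both calls, the entity keeps the high-confidence description and carries the higher confidence level, illustrating the convergence toward quality described above.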
"},{"location":"usage/enrichment/","title":"Enrichment connectors","text":"Under construction
We are doing our best to complete this page. If you want to participate, don't hesitate to join the Filigran Community on Slack or submit your pull request on the Github doc repository.
"},{"location":"usage/exploring-analysis/","title":"Analyses","text":"When you click on \"Analyses\" in the left-side bar, you see all the \"Analyses\" tabs, visible on the top bar on the left. By default, the user directly accesses the \"Reports\" tab, but can navigate to the other tabs as well.
From the Analyses
section, users can access the following tabs:
Reports
: See Reports as a sort of container to detail and structure what is contained in a specific report, either from a source or written by yourself. Think of it as an Intelligence Production in OpenCTI.Groupings
: Groupings are containers, like Reports, but do not represent an Intelligence Production. They regroup Objects sharing an explicit context. For example, a Grouping might represent a set of data that, in time, given sufficient analysis, would mature to convey an incident or threat report as a Report container.Malware Analyses
: As defined by the STIX 2.1 standard, Malware Analyses capture the metadata and results of a particular static or dynamic analysis performed on a malware instance or family.Notes
: Through this tab, you can find all the Notes that have been written in the platform, for example to add an analyst's unstructured knowledge about an Object.External references
: Intelligence is never created from nothing. External references give user a way to link sources or reference documents to any Object in the platform.Reports are one of the central component of the platform. It is from a Report
that knowledge is extracted and integrated in the platform for further navigation, analyses and exports. Always tying the information back to a report allows for the user to be able to identify the source of any piece of information in the platform at all time.
In the MITRE STIX 2.1 documentation, a Report
is defined as such :
Reports are collections of threat intelligence focused on one or more topics, such as a description of a threat actor, malware, or attack technique, including context and related details. They are used to group related threat intelligence together so that it can be published as a comprehensive cyber threat story.
As a result, a Report
object in OpenCTI is a set of attributes and metadata defining and describing a document outside the platform, which can be a threat intelligence report from a security reseearch team, a blog post, a press article a video, a conference extract, a MISP event, or any type of document and source.
When clicking on the Reports tab at the top left, you see the list of all the Reports you have access to, in respect with your allowed marking definitions. You can then search and filter on some common and specific attributes of reports.
"},{"location":"usage/exploring-analysis/#visualizing-knowledge-within-a-report","title":"Visualizing Knowledge within a Report","text":"When clicking on a Report, you land on the Overview tab. For a Report, the following tabs are accessible:
Exploring and modifying the structured Knowledge contained in a Report can be done through different lenses.
"},{"location":"usage/exploring-analysis/#graph-view","title":"Graph View","text":"In Graph view, STIX SDO are displayed as graph nodes and relationships as graph links. Nodes are colored depending of their type. Direct relationship are displayed as plain link and inferred relationships in dotted link. At the top right, you will find a serie of icons. From there you can change the current type of view. Here you can also perform global action on the Knowledge of the Report. Let's highlight 2 of them: - Suggestions: This tool suggests you some logical relationships to add between your contained Object to give more consistency to your Knowledge. - Share with an Organization: if you have designated a main Organization in the platform settings, you can here share your Report and its content with users of an other Organization. At the bottom, you have many option to manipulate the graph: - Multiple option for shaping the graph and applying forces to the nodes and links - Multiple selection options - Multiple filters, including a time range selector allowing you to see the evolution of the Knowledge within the Report. - Multiple creation and edition tools to modify the Knowledge contained in the Report.
"},{"location":"usage/exploring-analysis/#content-mapping-view","title":"Content mapping view","text":"Through this view, you can map exsisting or new Objects directly from a readable content, allowing you to quickly append structured Knowledge in your Report before refining it with relationships and details. This view is a great place to see the continuum between unstructured and structured Knowledge of a specific Intelligence Production.
"},{"location":"usage/exploring-analysis/#timeline-view","title":"Timeline view","text":"This view allows you to see the structured Knowledge chronologically. This view is really useful when the report describes an attack or a campaign that lasted some time, and the analyst payed attention to the dates. The view can be filtered and displayed relationships too.
"},{"location":"usage/exploring-analysis/#correlation-view","title":"Correlation view","text":"The correlation view is a great way to visualize and find other Reports related to your current subject of interest. This graph displays all Report related to the important nodes contained in your current Report, for example Objects like Malware or Intrusion sets.
"},{"location":"usage/exploring-analysis/#matrix-view","title":"Matrix view","text":"If your Report describes let's say an attack, a campaign, or an understanding of an Intrusion set, it should contains multiple attack patterns Objects to structure the Knowledge about the TTPs of the Threat Actor. Those attack patterns can be displayed as highlighted matrices, by default the MITRE ATT&CK Enterprise matrix. As some matrices can be huge, it can be also filtered to only display attack patterns describes in the Report.
"},{"location":"usage/exploring-analysis/#groupings","title":"Groupings","text":"Groupings are an alternative to Report for grouping Objects sharing a context without describing an Intelligence Production.
In the MITRE STIX 2.1 documentation, a Grouping
is defined as such :
A Grouping object explicitly asserts that the referenced STIX Objects have a shared context, unlike a STIX Bundle (which explicitly conveys no context). A Grouping object should not be confused with an intelligence product, which should be conveyed via a STIX Report. A STIX Grouping object might represent a set of data that, in time, given sufficient analysis, would mature to convey an incident or threat report as a STIX Report object. For example, a Grouping could be used to characterize an ongoing investigation into a security event or incident. A Grouping object could also be used to assert that the referenced STIX Objects are related to an ongoing analysis process, such as when a threat analyst is collaborating with others in their trust community to examine a series of Campaigns and Indicators.
When clicking on the Groupings tab at the top of the interface, you see the list of all the Groupings you have access to, in respect with your allowed marking definitions. You can then search and filter on some common and specific attributes of the groupings.
Clicking on a Grouping, you land on its Overview tab. For a Groupings, the following tabs are accessible: - Overview: as described here. - Knowledge: a complex tab that regroups all the structured Knowledge contained in the groupings, as for a Report, except for the Timeline view. As described here. - Entities: A table containing all SDO (Stix Domain Objects) contained in the Grouping, with search and filters available. It also display if the SDO has been added directly or through inferences with the reasonging engine - Observables: A table containing all SCO (Stix Cyber Observable) contained in the Grouping, with search and filters available. It also display if the SDO has been added directly or through inferences with the reasonging engine - Data: as described here.
"},{"location":"usage/exploring-analysis/#malware-analyses","title":"Malware Analyses","text":"Malware analyses are an important part of the Cyber Threat Intelligence, allowing an precise understanding of what and how a malware really do on the host but also how and from where it receives its command and communicates its results.
In OpenCTI, Malware Analyses can be created from enrichment connectors that will take an Observable as input and perform a scan on a online service platform to bring back results. As such, Malware Analyses can be done on File, Domain and URL.
In the MITRE STIX 2.1 documentation, a Malware Analyses
is defined as such :
Malware Analyses captures the metadata and results of a particular static or dynamic analysis performed on a malware instance or family.
When clicking on the Malware Analyses tab at the top of the interface, you see the list of all the Malware Analyses you have access to, in respect with your allowed marking definitions. You can then search and filter on some common and specific attributes of the Malware Analyses.
Clicking on a Malware Analyses, you land on its Overview tab. The following tabs are accessible: - Overview: This view contains some additions from the common Overview here. You will find here details about how the analysis have been performed, what is the global result regarding the malicioussness of the analysed artifact and all the Observables that have been found during the analysis. - Knowledge: If you Malware analysis is linked to other Objects that are not part of the analysis result, they will be displayed here. As described here. - Data: as described here. - History: as described here.
"},{"location":"usage/exploring-analysis/#notes","title":"Notes","text":"Not every Knowledge can be structured. For allowing any users to share their insights about a specific Knowledge, they can create a Note for every Object and relationship in OpenCTI they can access to. All the Notes are listed within the Analyses menu for allowing global review of this unstructured addition to the global Knowledge.
In the MITRE STIX 2.1 documentation, a Note
is defined as such :
A Note is intended to convey informative text to provide further context and/or to provide additional analysis not contained in the STIX Objects, Marking Definition objects, or Language Content objects which the Note relates to. Notes can be created by anyone (not just the original object creator).
Clicking on a Note, you land on its Overview tab. The following tabs are accessible: - Overview: as described here. - Data: as described here. - History: as described here.
"},{"location":"usage/exploring-analysis/#external-references","title":"External references","text":"Intelligence is never created from nothing. External references give user a way to link sources or reference documents to any Object in the platform. All external references are listed within the Analyses menu for accessing directly sources of the structured Knowledge.
In the MITRE STIX 2.1 documentation, a External references
is defined as such :
External references are used to describe pointers to information represented outside of STIX. For example, a Malware object could use an external reference to indicate an ID for that malware in an external database or a report could use references to represent source material.
Clicking on an External reference, you land on its Overview tab. The following tabs are accessible: - Overview: as described here.
"},{"location":"usage/exploring-arsenal/","title":"Arsenal","text":"When you click on \"Arsenal\" in the left-side bar, you access all the \"Arsenal\" tabs, visible on the top bar on the left. By default, the user directly access the \"Malware\" tab, but can navigate to the other tabs as well.
From the Arsenal
section, users can access the following tabs:
Malware
: Malware
represents any piece of code specifically designed to damage, disrupt, or gain unauthorized access to computer systems, networks, or user data.Channels
: Channels
, in the context of cybersecurity, refer to places or means through which actors disseminate information. This category is used in particular in the context of FIMI (Foreign Information Manipulation Interference). Tools
: Tools
represent legitimate, installed software or hardware applications on an operating system that can be misused by attackers for malicious purposes. (e.g. LOLBAS).Vulnerabilities
: Vulnerabilities
are weaknesses or that can be exploited by attackers to compromise the security, integrity, or availability of a computer system or network.Malware encompasses a broad category of malicious pieces of code built, deployed, and operated by intrusion set. Malware can take many forms, including viruses, worms, Trojans, ransomware, spyware, and more. These entities are created by individuals or groups, including state-nations, state-sponsored groups, corporations, or hacktivist collectives.
Use the Malware
SDO to model and track these threats comprehensively, facilitating in-depth analysis, response, and correlation with other security data.
When clicking on the Malware tab on the top left, you see the list of all the Malware you have access to, in respect with your allowed marking definitions. These malware are displayed as Cards where you can find a summary of the important Knowledge associated with each of them: description, aliases, related intrusion sets, countries and sectors they target, and labels. You can then search and filter on some common and specific attributes of Malware.
At the top right of each Card, you can click the star icon to put it as favorite. It will pin the card on top of the list. You will also be able to display all your favorite easily in your Custom Dashboards.
"},{"location":"usage/exploring-arsenal/#visualizing-knowledge-associated-with-a-malware","title":"Visualizing Knowledge associated with a Malware","text":"When clicking on an Malware
card you land on its Overview tab. For a Malware, the following tabs are accessible:
Channels
- such as forums, websites and social media platforms (e.g. Twitter, Telegram) - are mediums for disseminating news, knowledge, and messages to a broad audience. While they offer benefits like open communication and outreach, they can also be leveraged for nefarious purposes, such as spreading misinformation, coordinating cyberattacks, or promoting illegal activities.
Monitoring and managing content within Channels
aids in analyzing threats, activities, and indicators associated with various threat actors, campaigns, and intrusion sets.
When clicking on the Channels tab at the top left, you see the list of all the Channels you have access to, in respect with your allowed marking definitions. These channels are displayed in a list where you can find certain fields characterizing the entity: type of channel, labels, and dates. You can then search and filter on some common and specific attributes of Channels.
"},{"location":"usage/exploring-arsenal/#visualizing-knowledge-associated-with-a-channel","title":"Visualizing Knowledge associated with a Channel","text":"When clicking on a Channel
in the list, you land on its Overview tab. For a Channel, the following tabs are accessible:
Tools
refers to legitimate, pre-installed software applications, command-line utilities, or scripts that are present on a compromised system. These objects enable you to model and monitor the activities of these tools, which can be misused by attackers.
When clicking on the Tools
tab at the top left, you see the list of all the Tools
you have access to, in respect with your allowed marking definitions. These tools are displayed in a list where you can find certain fields characterizing the entity: labels and dates. You can then search and filter on some common and specific attributes of Tools.
When clicking on a Tool
in the list, you land on its Overview tab. For a Tool, the following tabs are accessible:
Vulnerabilities
represent weaknesses or flaws in software, hardware, configurations, or systems that can be exploited by malicious actors. This object assists in managing and tracking the organization's security posture by identifying areas that require attention and remediation, while also providing insights into associated intrusion sets, malware and campaigns where relevant.
When clicking on the Vulnerabilities
tab at the top left, you see the list of all the Vulnerabilities
you have access to, in respect with your allowed marking definitions. These vulnerabilities are displayed in a list where you can find certain fields characterizing the entity: CVSS3 severity, labels, dates and creators (in the platform). You can then search and filter on some common and specific attributes of Vulnerabilities.
When clicking on a Vulnerabilities
in the list, you land on its Overview tab. For a Vulnerability, the following tabs are accessible:
When you click on \"Cases\" in the left-side bar, you access all the \"Cases\" tabs, visible on the top bar on the left. By default, the user directly access the \"Incident Responses\" tab, but can navigate to the other tabs as well.
As Analyses, Cases
can contain other objects. This way, by adding context and results of your investigations in the case, you will be able to get an up-to-date overview of the ongoing situation, and later produce more easily an incident report.
From the Cases
section, users can access the following tabs:
Incident Responses
: This type of Case is dedicated to the management of incidents. An Incident Response case does not represent an incident, but all the context and actions that will encompass the response to a specific incident.Request for Information
: CTI teams are often asked to provide extensive information and analysis on a specific subject, be it related to an ongoing incident or a particular trending threat. Request for Information cases allow you to store context and actions relative to this type of request and its response.Request for Takedown
: When an organization is targeted by an attack campaign, a typical response action can be to request the Takedown of elements of the attack infrastructure, for example a domain name impersonating the organization to phish its employees, or an email address used to deliver phishing content. As a Takedown in most cases needs to reach out to external providers and be effective quickly, it often requires specific workflows. Request for Takedown cases give you a dedicated space to manage these specific actions.Tasks
: In every case, you need tasks to be performed in order to solve it. The Tasks tab allows you to review all created tasks, quickly spot tasks past their due date, or see every task assigned to a specific user.Feedbacks
: If you use your platform to interact with other teams and provide them with CTI Knowledge, some users may want to give you feedback about it. These feedbacks can easily be considered as another type of case to solve, as they will often refer to Knowledge inconsistencies or gaps.
To manage the situation, you can issue Tasks
and assign them to users in the platform, by directly creating a Task or by applying a Case template that will append a list of predefined tasks.
To bring context, you can use your Case as a container (like Reports or Groupings), allowing you to add any Knowledge from your platform in it. You can also use this possibility to trace your investigation, your Case playing the role of an Incident report. You will find more information about case management here.
Incident Response, Request for Information & Request for Takedown are not STIX 2.1 Objects.
When clicking on the Incident Response, Request for Information & Request for Takedown tabs at the top, you see the list of all the Cases you have access to, in respect with your allowed marking definitions. You can then search and filter on some common and specific attributes.
"},{"location":"usage/exploring-cases/#visualizing-knowledge-within-an-incident-response-request-for-information-request-for-takedown","title":"Visualizing Knowledge within an Incident Response, Request for Information & Request for Takedown","text":"When clicking on an Incident Response, Request for Information or Request for Takedown, you land on the Overview tab. The following tabs are accessible:
Exploring and modifying the structured Knowledge contained in a Case can be done through different lenses.
"},{"location":"usage/exploring-cases/#graph-view","title":"Graph View","text":"In Graph view, STIX SDO are displayed as graph nodes and relationships as graph links. Nodes are colored depending on their type. Direct relationship are displayed as plain link and inferred relationships in dotted link. At the top right, you will find a series of icons. From there you can change the current type of view. Here you can also perform global action on the Knowledge of the Case. Let's highlight 2 of them:
Through this view, you can map existing or new Objects directly from a readable content, allowing you to quickly append structured Knowledge in your Case before refining it with relationships and details. This view is a great place to see the continuum between unstructured and structured Knowledge.
"},{"location":"usage/exploring-cases/#timeline-view","title":"Timeline view","text":"This view allows you to see the structured Knowledge chronologically. This view is particularly useful in the context of a Case, allowing you to see the chain of events, either from the attack perspectives, the defense perspectives or both. The view can be filtered and displayed relationships too.
"},{"location":"usage/exploring-cases/#matrix-view","title":"Matrix view","text":"If your Case contains attack patterns, you will be able to visualize them in a Matrix view.
"},{"location":"usage/exploring-cases/#tasks","title":"Tasks","text":"Tasks are actions to be performed in the context of a Case (Incident Response, Request for Information, Request for Takedown). Usually, a task is assigned to a user, but important tasks may involve more participants.
When clicking on the Tasks tab at the top of the interface, you see the list of all the Tasks you have access to, in respect with your allowed marking definitions. You can then search and filter on some common and specific attributes of the tasks.
Clicking on a Task, you land on its Overview tab. For a Tasks, the following tabs are accessible: - Overview: as described here. - Data: as described here. - History: as described here.
"},{"location":"usage/exploring-cases/#feedbacks","title":"Feedbacks","text":"When a user fill a feedback form from its Profile/Feedback menu, it will then be accessible here.
This feature gives you the opportunity to engage with other users of your platform and to respond directly to their concerns about it or the Knowledge, without the need for third-party software.
Clicking on a Feedback, you land on its Overview tab. For a Feedback, the following tabs are accessible: - Overview: as described here. - Content: as described here. - Data: as described here. - History: as described here.
"},{"location":"usage/exploring-entities/","title":"Entities","text":"OpenCTI's Entities objects provides a comprehensive framework for modeling various targets and attack victims within your threat intelligence data. With five distinct Entity object types, you can represent sectors, events, organizations, systems, and individuals. This robust classification empowers you to contextualize threats effectively, enhancing the depth and precision of your analysis.
When you click on \"Entities\" in the left-side bar, you access all the \"Entities\" tabs, visible on the top bar on the left. By default, the user directly access the \"Sectors\" tab, but can navigate to the other tabs as well.
From the Entities
section, users can access the following tabs:
Sectors
: areas of activity.Events
: events in the real world.Organizations
: groups with specific aims such as companies and government entities.Systems
: technologies such as platforms and software.Individuals
: real persons.Sectors represent specific domains of activity, defining areas such as energy, government, health, finance, and more. Utilize sectors to categorize targeted industries or sectors of interest, providing valuable context for threat intelligence analysis within distinct areas of the economy.
When clicking on the Sectors tab at the top left, you see the list of all the Sectors you have access to, in respect with your allowed marking definitions.
"},{"location":"usage/exploring-entities/#visualizing-knowledge-associated-with-a-sector","title":"Visualizing Knowledge associated with a Sector","text":"When clicking on a Sector
in the list, you land on its Overview tab. For a Sector, the following tabs are accessible:
Sightings
relationships corresponding to events in which an Indicator
(IP, domain name, url, etc.) is sighted in the Sector.Events encompass occurrences like international sports events, summits (e.g., G20), trials, conferences, or any significant happening in the real world. By modeling events, you can analyze threats associated with specific occurrences, allowing for targeted investigations surrounding high-profile incidents.
When clicking on the Events tab at the top left, you see the list of all the Events you have access to, in respect with your allowed marking definitions.
"},{"location":"usage/exploring-entities/#visualizing-knowledge-associated-with-an-event","title":"Visualizing Knowledge associated with an Event","text":"When clicking on an Event
in the list, you land on its Overview tab. For an Event, the following tabs are accessible:
Sightings
relationships corresponding to events in which an Indicator
(IP, domain name, url, etc.) is sighted during an attack against the Event.Organizations include diverse entities such as companies, government bodies, associations, non-profits, and other groups with specific aims. Modeling organizations enables you to understand the threat landscape concerning various entities, facilitating investigations into cyber-espionage, data breaches, or other malicious activities targeting specific groups.
When clicking on the Organizations tab at the top left, you see the list of all the Organizations you have access to, in respect with your allowed marking definitions.
"},{"location":"usage/exploring-entities/#visualizing-knowledge-associated-with-an-organization","title":"Visualizing Knowledge associated with an Organization","text":"When clicking on an Organization
in the list, you land on its Overview tab. For an Organization, the following tabs are accessible:
Sightings
relationships corresponding to events in which an Indicator
(IP, domain name, url, etc.) is sighted in the Organization.Furthermore, an Organization can be observed from an \"Author\" perspective. It is possible to change this viewpoint to the right of the entity name, using the \"Display as\" drop-down menu (see screenshot below). This different perspective is accessible in the Overview, Knowledge and Analyses tabs. When switched to \"Author\" mode, the observed data pertains to the entity's description as an author within the platform:
Report
, Groupings
) and Cases (Incident response
, Request for Information
, Request for Takedown
) for which the Organization is the author.Systems represent software applications, platforms, frameworks, or specific tools like WordPress, VirtualBox, Firefox, Python, etc. Modeling systems allows you to focus on threats related to specific software or technology, aiding in vulnerability assessments, patch management, and securing critical applications.
When clicking on the Systems tab at the top left, you see the list of all the Systems you have access to, in respect with your allowed marking definitions.
"},{"location":"usage/exploring-entities/#visualizing-knowledge-associated-with-a-system","title":"Visualizing Knowledge associated with a System","text":"When clicking on a System
in the list, you land on its Overview tab. For a System, the following tabs are accessible:
Sightings
relationships corresponding to events in which an Indicator
(IP, domain name, url, etc.) is sighted in the System.Furthermore, a System can be observed from an \"Author\" perspective. It is possible to change this viewpoint to the right of the entity name, using the \"Display as\" drop-down menu (see screenshot below). This different perspective is accessible in the Overview, Knowledge and Analyses tabs. When switched to \"Author\" mode, the observed data pertains to the entity's description as an author within the platform:
Report
, Groupings
) and Cases (Incident response
, Request for Information
, Request for Takedown
) for which the System is the author.Individuals represent specific persons relevant to your threat intelligence analysis. This category includes targeted individuals, or influential figures in various fields. Modeling individuals enables you to analyze threats related to specific people, enhancing investigations into cyber-stalking, impersonation, or other targeted attacks.
When clicking on the Individuals tab at the top left, you see the list of all the Individuals you have access to, in respect with your allowed marking definitions.
"},{"location":"usage/exploring-entities/#visualizing-knowledge-associated-with-an-individual","title":"Visualizing Knowledge associated with an Individual","text":"When clicking on an Individual
in the list, you land on its Overview tab. For an Individual, the following tabs are accessible:
Sightings
relationships corresponding to events in which an Indicator
(IP, domain name, url, etc.) is sighted in the Individual.Furthermore, an Individual can be observed from an \"Author\" perspective. It is possible to change this viewpoint to the right of the entity name, using the \"Display as\" drop-down menu (see screenshot below). This different perspective is accessible in the Overview, Knowledge and Analyses tabs. When switched to \"Author\" mode, the observed data pertains to the entity's description as an author within the platform:
Report
, Groupings
) and Cases (Incident response
, Request for Information
, Request for Takedown
) for which the Individual is the author.When you click on \"Events\" in the left-side bar, you access all the \"Events\" tabs, visible on the top bar on the left. By default, the user directly accesses the \"Incidents\" tab, but can navigate to the other tabs as well.
From the Events
section, users can access the following tabs:
Incidents
: In OpenCTI, Incidents
correspond to a negative event happening on an information system. This can include a cyberattack (intrusion, phishing, etc.), a consolidated security alert generated by a SIEM or EDR that needs to be qualified, and so on. It can also refer to an information warfare attack in the context of countering disinformation.Sightings
: Sightings
correspond to the event in which an Observable
(IP, domain name, certificate, etc.) is detected by or within an information system, an individual or an organization. Most often, this corresponds to a security event transmitted by a SIEM or an EDR.Observed Data
: Observed Data
has been added to OpenCTI for compliance with the STIX 2.1 standard. You can see it as a pseudo-container that holds Observables, like a line of a firewall log for example. Currently, it is rarely used.Incidents usually represent negative events impacting resources you want to protect, but local definitions can vary a lot, from a simple security event sent by a SIEM to a massive-scale supply chain attack impacting a whole activity sector.
In the MITRE STIX 2.1, the Incident
SDO has not yet been finalized and is the subject of ongoing work as part of a forthcoming STIX Extension.
When clicking on the Incidents tab at the top left, you see the list of all the Incidents you have access to, in respect with your allowed marking definitions.
"},{"location":"usage/exploring-events/#visualizing-knowledge-associated-with-an-incident","title":"Visualizing Knowledge associated with an Incident","text":"When clicking on an Incident
in the list, you land on its Overview tab. For an Incident, the following tabs are accessible:
The Sightings
correspond to events in which an Observable
(IP, domain name, url, etc.) is detected by or within an information system, an individual or an organization. Most often, this corresponds to a security event transmitted by a SIEM or EDR.
In OpenCTI, as we are in a cybersecurity context, Sightings
are associated with Indicators
of Compromise (IoC) and the notion of \"True positive\" and \"False positive\".
It is important to note that Sightings are a type of relationship (not a STIX SDO or STIX SCO) between an Observable and an Entity or a Location.
When clicking on the Sightings tab at the top left, you see the list of all the Sightings you have access to, in respect with your allowed marking definitions.
"},{"location":"usage/exploring-events/#visualizing-knowledge-associated-with-a-sighting","title":"Visualizing Knowledge associated with a Sighting","text":"When clicking on a Sighting
in the list, you land on its Overview tab. As other relationships in the platform, Sighting's overview displays common related metadata, containers, external references, notes and entities linked by the relationship.
In addition, this overview displays: - Qualification: whether the Sighting is a True Positive or a False Positive - Count: the number of times the event has been seen
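As a sketch of the underlying data model (all identifiers, timestamps and values below are invented for illustration, not taken from a real platform), a Sighting maps naturally onto the STIX 2.1 `sighting` relationship object, whose `count` field corresponds to the Count shown in the overview:

```python
# Minimal sketch of a STIX 2.1 "sighting" relationship object (SRO).
# Every identifier and timestamp here is illustrative only.
sighting = {
    "type": "sighting",
    "spec_version": "2.1",
    "id": "sighting--6b0a5d33-1111-2222-3333-444455556666",
    "created": "2024-01-10T08:00:00.000Z",
    "modified": "2024-01-10T08:00:00.000Z",
    # What was sighted: an indicator (IP, domain name, url, etc.)
    "sighting_of_ref": "indicator--aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
    # Where it was sighted: an identity (an Organization, for instance)
    "where_sighted_refs": ["identity--11111111-2222-3333-4444-555555555555"],
    # The "Count" shown in the overview: times the event has been seen
    "count": 3,
    "first_seen": "2024-01-09T22:14:00.000Z",
    "last_seen": "2024-01-10T07:58:00.000Z",
}
```

The \"Qualification\" (True/False Positive) attribute is OpenCTI-specific and is not part of the base STIX 2.1 sighting object.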
"},{"location":"usage/exploring-events/#observed-data","title":"Observed Data","text":""},{"location":"usage/exploring-events/#general-presentation_2","title":"General presentation","text":"Observed Data
correspond to an extract from a log that contains Observables.
In the MITRE STIX 2.1, the Observed Data
SDO is defined as such:
Observed Data conveys information about cybersecurity related entities such as files, systems, and networks using the STIX Cyber-observable Objects (SCOs). For example, Observed Data can capture information about an IP address, a network connection, a file, or a registry key. Observed Data is not an intelligence assertion, it is simply the raw information without any context for what it means.
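To make the quoted definition concrete, here is a minimal sketch of an `observed-data` SDO as a Python dict (all identifiers and values are invented for illustration): it wraps a time window and an observation count around references to the SCOs extracted from a log line.

```python
# Minimal sketch of a STIX 2.1 "observed-data" SDO (illustrative values only).
observed_data = {
    "type": "observed-data",
    "spec_version": "2.1",
    "id": "observed-data--0f0f0f0f-1111-2222-3333-444444444444",
    "created": "2024-01-10T08:00:00.000Z",
    "modified": "2024-01-10T08:00:00.000Z",
    "first_observed": "2024-01-10T07:00:00Z",
    "last_observed": "2024-01-10T07:05:00Z",
    "number_observed": 2,  # how many times this raw data was observed
    # References to the STIX Cyber-observable Objects (the Observables)
    "object_refs": [
        "ipv4-addr--aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
        "network-traffic--11111111-2222-3333-4444-555555555555",
    ],
}
```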
When clicking on the Observed Data
tab at the top left, you see the list of all the Observed Data
you have access to, in respect with your allowed marking definitions.
When clicking on an Observed Data
in the list, you land on its Overview tab. The following tabs are accessible:
OpenCTI's Locations objects provide a comprehensive framework for representing various geographic entities within your threat intelligence data. With five distinct Location object types, you can precisely define regions, countries, areas, cities, and specific positions. This robust classification empowers you to contextualize threats geographically, enhancing the depth and accuracy of your analysis.
When you click on \"Locations\" in the left-side bar, you access all the \"Locations\" tabs, visible on the top bar on the left. By default, the user directly access the \"Regions\" tab, but can navigate to the other tabs as well.
From the Locations
section, users can access the following tabs:
Regions
: very large geographical territories, such as a continent.Countries
: the world's countries.Areas
: more or less extensive geographical areas, often without clearly defined limits.Cities
: the world's cities.Positions
: very precise positions on the globe.Regions encapsulate broader geographical territories, often representing continents or significant parts of continents. Examples include EMEA (Europe, Middle East, and Africa), Asia, Western Europe, and North America. Utilize regions to categorize large geopolitical areas and gain macro-level insights into threat patterns.
When clicking on the Regions tab at the top left, you see the list of all the Regions you have access to, in respect with your allowed marking definitions.
"},{"location":"usage/exploring-locations/#visualizing-knowledge-associated-with-a-region","title":"Visualizing Knowledge associated with a Region","text":"When clicking on a Region
in the list, you land on its Overview tab. For a Region, the following tabs are accessible:
Details
section but a map locating the Region.Sightings
relationships corresponding to events in which an Indicator
(IP, domain name, url, etc.) is sighted in a Region.Countries represent individual nations across the world. With this object type, you can specify detailed information about a particular country, enabling precise localization of threat intelligence data. Countries are fundamental entities in geopolitical analysis, offering a focused view of activities within national borders.
When clicking on the Countries tab at the top left, you see the list of all the Countries you have access to, in respect with your allowed marking definitions.
"},{"location":"usage/exploring-locations/#visualizing-knowledge-associated-with-a-country","title":"Visualizing Knowledge associated with a Country","text":"When clicking on a Country
in the list, you land on its Overview tab. For a Country, the following tabs are accessible:
Details
section but a map locating the Country.Sightings
relationships corresponding to events in which an Indicator
(IP, domain name, url, etc.) is sighted in a Country.Areas define specific geographical regions of interest, such as the Persian Gulf, the Balkans, or the Caucasus. Use areas to identify unique zones with distinct geopolitical, cultural, or strategic significance. This object type facilitates nuanced analysis of threats within defined geographic contexts.
When clicking on the Areas tab at the top left, you see the list of all the Areas you have access to, in respect with your allowed marking definitions.
"},{"location":"usage/exploring-locations/#visualizing-knowledge-associated-with-an-area","title":"Visualizing Knowledge associated with an Area","text":"When clicking on an Area
in the list, you land on its Overview tab. For an Area, the following tabs are accessible:
Details
section but a map locating the Area.Sightings
relationships corresponding to events in which an Indicator
(IP, domain name, url, etc.) is sighted in an Area.Cities provide granular information about urban centers worldwide. From major metropolises to smaller towns, cities are crucial in understanding localized threat activities. With this object type, you can pinpoint threats at the urban level, aiding in tactical threat assessments and response planning.
When clicking on the Cities tab at the top left, you see the list of all the Cities you have access to, in respect with your allowed marking definitions.
"},{"location":"usage/exploring-locations/#visualizing-knowledge-associated-with-a-city","title":"Visualizing Knowledge associated with a City","text":"When clicking on a City
in the list, you land on its Overview tab. For a City, the following tabs are accessible:
Details
section but a map locating the City.Sightings
relationships corresponding to events in which an Indicator
(IP, domain name, url, etc.) is sighted in a City.Positions represent highly precise geographical points, such as monuments, buildings, or specific event locations. This object type allows you to define exact coordinates, enabling accurate mapping of events or incidents. Positions enhance the granularity of your threat intelligence data, facilitating precise geospatial analysis.
When clicking on the Positions tab at the top left, you see the list of all the Positions you have access to, in respect with your allowed marking definitions.
"},{"location":"usage/exploring-locations/#visualizing-knowledge-associated-with-a-position","title":"Visualizing Knowledge associated with a Position","text":"When clicking on a Position
in the list, you land on its Overview tab. For a Position, the following tabs are accessible:
Sightings
relationships corresponding to events in which an Indicator
(IP, domain name, url, etc.) is sighted at a Position.Under construction
We are doing our best to complete this page. If you want to participate, don't hesitate to join the Filigran Community on Slack or submit your pull request on the Github doc repository.
"},{"location":"usage/exploring-techniques/","title":"Techniques","text":"When you click on \"Techniques\" in the left-side bar, you access all the \"Techniques\" tabs, visible on the top bar on the left. By default, the user directly access the \"Attack pattern\" tab, but can navigate to the other tabs as well.
From the Techniques
section, users can access the following tabs:
Attack pattern
: attack patterns used by threat actors to perform their attacks. By default, OpenCTI is provisioned with attack patterns from the MITRE ATT&CK matrices (for CTI) and the DISARM matrix (for FIMI).Narratives
: In OpenCTI, narratives used by threat actors can be represented and linked to other Objects. Narratives are mainly used in the context of disinformation campaigns where it is important to trace which narratives have been and are still used by threat actors.Courses of action
: A Course of Action is an action taken either to prevent an attack or to respond to an attack that is in progress. It may describe technical, automatable responses (applying patches, reconfiguring firewalls) but can also describe higher level actions like employee training or policy changes. For example, a course of action to mitigate a vulnerability could describe applying the patch that fixes it.Data sources
: Data sources represent the various subjects/topics of information that can be collected by sensors/logs; they also include data components.Data components
: Data components identify specific properties/values of a data source relevant to detecting a given ATT&CK technique or sub-technique.Attack patterns are used by threat actors to perform their attacks. By default, OpenCTI is provisioned with attack patterns from the MITRE ATT&CK matrices and CAPEC (for CTI) and the DISARM matrix (for FIMI).
In the MITRE STIX 2.1 documentation, an Attack pattern
is defined as such:
Attack Patterns are a type of TTP that describe ways that adversaries attempt to compromise targets. Attack Patterns are used to help categorize attacks, generalize specific attacks to the patterns that they follow, and provide detailed information about how attacks are performed. An example of an attack pattern is \"spear phishing\": a common type of attack where an attacker sends a carefully crafted e-mail message to a party with the intent of getting them to click a link or open an attachment to deliver malware. Attack Patterns can also be more specific; spear phishing as practiced by a particular threat actor (e.g., they might generally say that the target won a contest) can also be an Attack Pattern.
When clicking on the Attack pattern tab at the top left, you access the list of all the attack patterns you have access to, in respect with your allowed marking definitions. You can then search and filter on some common and specific attributes of attack patterns.
"},{"location":"usage/exploring-techniques/#visualizing-knowledge-associated-with-an-attack-pattern","title":"Visualizing Knowledge associated with an Attack pattern","text":"When clicking on an Attack pattern, you land on its Overview tab. For an Attack pattern, the following tabs are accessible:
Overview: the Overview of an Attack pattern differs slightly from the usual one described here. The \"Details\" box is more structured and contains information about:
parent or sub-techniques (as in the MITRE ATT&CK matrices),
In OpenCTI, narratives used by threat actors can be represented and linked to other Objects. Narratives are mainly used in the context of disinformation campaigns where it is important to trace which narratives have been and are still used by threat actors.
An example of Narrative can be \"The country A is weak and corrupted\" or \"The ongoing operation aims to free people\".
A Narrative can be a means used in the context of a broader attack, or the very goal of the operation: a vision to impose.
When clicking on the Narrative tab at the top left, you access the list of all the Narratives you have access to, in respect with your allowed marking definitions. You can then search and filter on some common and specific attributes of narratives.
"},{"location":"usage/exploring-techniques/#visualizing-knowledge-associated-with-a-narrative","title":"Visualizing Knowledge associated with a Narrative","text":"When clicking on a Narrative, you land on its Overview tab. For a Narrative, the following tabs are accessible:
In the MITRE STIX 2.1 documentation, a Course of action
is defined as such:
A Course of Action is an action taken either to prevent an attack or to respond to an attack that is in progress. It may describe technical, automatable responses (applying patches, reconfiguring firewalls) but can also describe higher level actions like employee training or policy changes. For example, a course of action to mitigate a vulnerability could describe applying the patch that fixes it.
When clicking on the Courses of action
tab at the top left, you access the list of all the Courses of action you have access to, in respect with your allowed marking definitions. You can then search and filter on some common and specific attributes of courses of action.
When clicking on a Course of Action
, you land on its Overview tab. For a Course of action, the following tabs are accessible:
In the MITRE ATT&CK documentation, Data sources
are defined as such:
Data sources represent the various subjects/topics of information that can be collected by sensors/logs. Data sources also include data components, which identify specific properties/values of a data source relevant to detecting a given ATT&CK technique or sub-technique.
"},{"location":"usage/exploring-techniques/#visualizing-knowledge-associated-with-a-data-source-or-a-data-components","title":"Visualizing Knowledge associated with a Data source or a Data components","text":"When clicking on a Data source
or a Data component
, you land on its Overview tab. For a Data source or a Data component, the following tabs are accessible:
When you click on \"Threats\" in the left-side bar, you access all the \"Threats\" tabs, visible on the top bar on the left. By default, the user directly access the \"Threat Actor (Group)\" tab, but can navigate to the other tabs as well.
From the Threats
section, users can access the following tabs:
Threat actors (Group)
: Threat actor (Group) represents a physical group of attackers operating an Intrusion set, using malware and attack infrastructure, etc.Threat actors (Individual)
: Threat actor (Individual) represents a real attacker that can be described by physical and personal attributes and motivations. Threat actor (Individual) operates Intrusion set, uses malware and infrastructure, etc.Intrusion sets
: Intrusion set is an important concept in the Cyber Threat Intelligence field. It is a consistent set of technical and non-technical elements corresponding to what a Threat actor does, how, and why. It is particularly useful for associating multiple attacks and malicious actions with a defined Threat, even without sufficient information regarding who performed them. Often, as your understanding of the threat grows, you will link an Intrusion set to a Threat actor (either a Group or an Individual).Campaigns
: Campaign represents a series of attacks taking place in a certain period of time and/or targeting a consistent subset of Organization/Individual.Threat actors are the humans who are building, deploying and operating intrusion sets. A threat actor can be an single individual or a group of attackers (who may be composed of individuals). A group of attackers may be a state-nation, a state-sponsored group, a corporation, a group of hacktivists, etc.
Beware: groups of attackers might be modelled as \"Intrusion sets\" in feeds, as there is sometimes a misunderstanding in the industry between a group of people and the technical/operational intrusion set they operate.
When clicking on the Threat actor (Group or Individual) tabs at the top left, you see the list of all the groups of Threat actors or Individual Threat actors you have access to, in respect with your allowed marking definitions. These groups or individuals are displayed as Cards where you can find a summary of the important Knowledge associated with each of them: description, aliases, malware they used, countries and industries they target, labels. You can then search and filter on some common and specific attributes of Threat actors.
At the top right of each Card, you can click the star icon to mark it as a favorite. This pins the card at the top of the list. You will also be able to display all your favorites easily in your Custom Dashboards.
"},{"location":"usage/exploring-threats/#demographic-and-biographic-information","title":"Demographic and Biographic Information","text":"Individual Threat actors have unique properties to represent demographic and biographic information. Currently tracked demographics include their countries of residence, citizenships, date of birth, gender, and more.
Biographic information includes their eye and hair color, as well as known heights and weights.
An Individual Threat actor can also be tracked as employed by an Organization or a Threat Actor group. This relationship can be set under the knowledge tab.
"},{"location":"usage/exploring-threats/#visualizing-knowledge-associated-with-a-threat-actor","title":"Visualizing Knowledge associated with a Threat actor","text":"When clicking on a Threat actor Card, you land on its Overview tab. For a Threat actor, the following tabs are accessible:
An intrusion set is a consistent group of technical elements such as \"tactics, techniques and procedures\" (TTPs), tools, malware and infrastructure used by a threat actor against one or several victims, who usually share some characteristics (field of activity, country or region), to reach a similar goal whoever the victim is. The intrusion set may be deployed once or several times and may evolve with time. Several intrusion sets may be linked to one threat actor. All the entities described below may be linked to one intrusion set. There are many debates in the Threat Intelligence community on how to define an intrusion set and how to distinguish several intrusion sets with regards to:
As OpenCTI is very customizable, each organization or individual may use these categories as they wish. Alternatively, it is also possible to follow the categorization used by imported feeds.
When clicking on the Intrusion set tab on the top left, you see the list of all the Intrusion sets you have access to, in respect with your allowed marking definitions. These intrusion sets are displayed as Cards where you can find a summary of the important Knowledge associated with each of them: description, aliases, malware they used, countries and industries they target, labels. You can then search and filter on some common and specific attributes of Intrusion set.
At the top right of each Card, you can click the star icon to mark it as a favorite. This pins the card at the top of the list. You will also be able to display all your favorites easily in your Custom Dashboards.
"},{"location":"usage/exploring-threats/#visualizing-knowledge-associated-with-an-intrusion-set","title":"Visualizing Knowledge associated with an Intrusion set","text":"When clicking on an Intrusion set Card, you land on its Overview tab. The following tabs are accessible:
A campaign can be defined as \"a series of malicious activities or attacks (sometimes called a \"wave of attacks\") taking place within a limited period of time, against a defined group of victims, associated to a similar intrusion set and characterized by the use of one or several identical malware towards the various victims and common TTPs\". However, a campaign is an investigation element and may not be widely recognized. Thus, a provider might define a series of attacks as a campaign and another as an intrusion set. Campaigns can be attributed to an Intrusion set.
When clicking on the Campaign tab on the top left, you see the list of all the Campaigns you have access to, in respect with your allowed marking definitions. These campaigns are displayed as Cards where you can find a summary of the important Knowledge associated with each of them: description, aliases, malware used, countries and industries they target, labels. You can then search and filter on some common and specific attributes of Campaigns.
At the top right of each Card, you can click the star icon to mark it as a favorite. This pins the card at the top of the list. You will also be able to display all your favorites easily in your Custom Dashboards.
"},{"location":"usage/exploring-threats/#visualizing-knowledge-associated-with-a-campaign","title":"Visualizing Knowledge associated with a Campaign","text":"When clicking on an Campaign Card, you land on its Overview tab. The following tabs are accessible:
Under construction
We are doing our best to complete this page. If you want to participate, don't hesitate to join the Filigran Community on Slack or submit your pull request on the Github doc repository.
"},{"location":"usage/export-structured/","title":"Export in structured format","text":"Under construction
We are doing our best to complete this page. If you want to participate, don't hesitate to join the Filigran Community on Slack or submit your pull request on the Github doc repository.
"},{"location":"usage/feeds/","title":"Native feeds","text":""},{"location":"usage/feeds/#live-streams","title":"Live streams","text":""},{"location":"usage/feeds/#introduction","title":"Introduction","text":"The best way to consume OpenCTI data, whether it is through a stream connector or within another OpenCTI instance, is to use the live streams. Live streams are like TAXII collection (ie. serving STIX 2.1 bundles) but under steroids. This means that live streams are supporting:
To better understand how live streams work, let's take a few examples, from simple to complex.
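On the wire, a live stream is consumed as server-sent events (SSE), one event per element change. As a sketch (the endpoint path and the exact event names are assumptions to verify against your deployment), a minimal client only needs to parse the standard `event:`/`data:` SSE framing:

```python
import json

def parse_sse(lines):
    """Minimal SSE parser sketch: a blank line ends an event,
    'event:' carries the type (e.g. create/update/delete) and
    'data:' carries the JSON payload of the element."""
    event = {}
    for line in lines:
        line = line.rstrip("\n")
        if not line:  # blank line: event boundary
            if event:
                yield event
                event = {}
        elif line.startswith("event:"):
            event["event"] = line[len("event:"):].strip()
        elif line.startswith("data:"):
            event["data"] = json.loads(line[len("data:"):].strip())

# With a real platform, these lines would come from an authenticated
# HTTP response to the live stream URL (hypothetical sample below).
sample = [
    "event: create",
    'data: {"type": "indicator", "name": "demo"}',
    "",
]
events = list(parse_sse(sample))
```

Note this sketch handles a single `data:` line per event, which is enough to illustrate the framing; a production client should follow the full SSE specification (multi-line data, `id:` for resuming, heartbeats).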
Given a live stream with filters Entity type: Indicator AND
Label: detection. Let's see what happens with an indicator that has:

- the marking definition TLP:GREEN
- the author Crowdstrike
- a relationship indicates to the malware Emotet

Action | Result in stream (resolve-dependencies=false) | Result in stream (resolve-dependencies=true)
--- | --- | ---
1. Create an indicator | Nothing | Nothing
2. Add the label detection | Create TLP:GREEN, create CrowdStrike, create the indicator | Create TLP:GREEN, create CrowdStrike, create the malware Emotet, create the indicator, create the relationship indicates
3. Remove the label detection | Delete the indicator | Delete the indicator
4. Add the label detection | Create the indicator | Create the indicator, create the relationship indicates
5. Delete the indicator | Delete the indicator | Delete the indicator
"},{"location":"usage/feeds/#taxii-collections","title":"TAXII Collections","text":"OpenCTI has an embedded TAXII API endpoint which provides valid STIX 2.1 bundles. If you wish to know more about the TAXII standard, please read the official introduction.
In OpenCTI you can create as many TAXII 2.1 collections as needed. Each of them can have specific filters to publish only a subset of the platform overall knowledge (specific types of entities, labels, marking definitions, etc.).
After creating a new collection, any system with a proper access token can consume it using different kinds of authentication (basic, bearer, etc.).
As when using the GraphQL API, TAXII 2.1 collections have a classic pagination system that should be handled by the consumer. Also, it's important to understand that element dependencies (nested IDs) inside the collection are not always contained/resolved in the bundle, so consistency needs to be handled at the client level.
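TAXII 2.1 pagination is driven by the `more` flag and `next` cursor returned in each envelope. A hedged sketch of the consumer-side loop follows; the page fetcher is injected so the logic can be shown without a live server, whereas a real client would issue authenticated HTTP GETs against the collection's objects endpoint:

```python
def iter_collection_objects(fetch_page):
    """Yield all STIX objects from a TAXII 2.1 collection.

    `fetch_page(cursor)` must return a TAXII 2.1 envelope of the form
    {"more": bool, "next": str, "objects": [...]}.
    """
    cursor = None
    while True:
        envelope = fetch_page(cursor)
        yield from envelope.get("objects", [])
        if not envelope.get("more"):
            break
        cursor = envelope["next"]

# Fake two-page collection to exercise the loop:
pages = {
    None: {"more": True, "next": "p2", "objects": [{"id": "indicator--1"}]},
    "p2": {"more": False, "objects": [{"id": "indicator--2"}]},
}
objs = list(iter_collection_objects(lambda cursor: pages[cursor]))
print(len(objs))  # → 2
```

Because nested IDs in a page are not always resolved in the bundle, a consumer would typically collect all objects first, then resolve dependencies locally, as noted above.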
"},{"location":"usage/feeds/#csv-feeds","title":"CSV feeds","text":"OpenCTI is able to publish data in CSV feeds on a rolling period.
"},{"location":"usage/getting-started/","title":"Getting started","text":"This guide aims to give you a full overview of the OpenCTI features and workflows. The platform can be used in various contexts to handle threat management use cases from a technical to a more strategic level. OpenCTI has been designed as a knowledge graph, taking inputs (threat intelligence feeds, sightings & alerts, vulnerabilities, assets, artifacts, etc.) and generating outputs based on built-in capabilities and/or connectors.
Here are some examples of use cases:
The welcome page gives any visitor to the OpenCTI platform an overview of its live activity. It can be replaced by a custom dashboard created by a user (or by the default dashboard of a role, a group or an organization).
"},{"location":"usage/getting-started/#indicators-in-the-dashboard","title":"Indicators in the dashboard","text":""},{"location":"usage/getting-started/#numbers","title":"Numbers","text":"
| Component | Description |
| --- | --- |
| Total entities | Number of entities (threat actor, intrusion set, indicator, etc.). |
| Total relationships | Number of relationships (targets, uses, indicates, etc.). |
| Total reports | Number of reports. |
| Total observables | Number of observables (IPv4-Addr, File, etc.). |
"},{"location":"usage/getting-started/#charts-lists","title":"Charts & lists","text":"
| Component | Description |
| --- | --- |
| Top labels | Top labels given to entities during the last 3 months. |
| Ingested entities | Number of entities ingested by month. |
| Top 10 active entities | List of the entities with the greatest number of relations over the last 3 months. |
| Targeted countries | Intensity of the targeting tied to the number of relations targets for a given country. |
| Observable distribution | Distribution of the number of observables by type. |
| Last ingested reports | Last reports ingested in the platform. |
"},{"location":"usage/import-automated/","title":"Automated import","text":"Automated imports in OpenCTI streamline the process of data ingestion, allowing users to effortlessly bring in valuable intelligence from diverse sources. This page focuses on the automated methods of importing data, which serve as bridges between OpenCTI and diverse external systems, formatting the data into a STIX bundle and importing it into the OpenCTI platform.
"},{"location":"usage/import-automated/#connectors","title":"Connectors","text":"Connectors in OpenCTI serve as dynamic gateways, facilitating the import of data from a wide array of sources and systems. Every connector is designed to handle specific data types and structures of the source, allowing OpenCTI to efficiently ingest the data.
"},{"location":"usage/import-automated/#connector-behaviors","title":"Connector behaviors","text":"The behavior of each connector is defined by its development, determining the types of data it imports and its configuration options. This flexibility allows users to customize the import process to their specific needs, ensuring a seamless and personalized data integration experience.
The level of configuration granularity regarding the imported data type varies with each connector. Nevertheless, connectors empower users to specify the date from which they wish to fetch data. This capability is particularly useful during the initial activation of a connector, enabling the retrieval of historical data. Following this, the connector operates in real-time, continuously importing new data from the source.
"},{"location":"usage/import-automated/#connector-ecosystem","title":"Connector Ecosystem","text":"OpenCTI's connector ecosystem covers a broad spectrum of sources, enhancing the platform's capability to integrate data from various contexts, from threat intelligence providers to specialized databases. The list of available connectors can be found in our connectors catalog. Connectors are categorized into three types: import connectors (the focus here), enrichment connectors, and stream consumers. Further documentation on connectors is available on the dedicated documentation page.
In summary, automated imports through connectors empower OpenCTI users with a scalable, efficient, and customizable mechanism for data ingestion, ensuring that the platform remains enriched with the latest and most relevant intelligence.
"},{"location":"usage/import-automated/#native-automated-import","title":"Native automated import","text":"In OpenCTI, the \"Data > Ingestion\" section provides users with built-in functions for automated data import. These functions are designed for specific purposes and can be configured to seamlessly ingest data into the platform. Here, we'll explore the configuration process for the three built-in functions: Live Streams, TAXII Feeds, and RSS Feeds.
"},{"location":"usage/import-automated/#live-streams","title":"Live streams","text":"Live Streams enable users to consume data from another OpenCTI platform, fostering collaborative intelligence sharing. Here's a step-by-step guide to configure Live streams synchroniser:
https://[domain]
; don't include the path).Additional configuration options:
TAXII Feeds in OpenCTI provide a robust mechanism for ingesting TAXII collections from TAXII servers or other OpenCTI instances. Configuring TAXII ingester involves specifying essential details to seamlessly integrate threat intelligence data. Here's a step-by-step guide to configure TAXII ingesters:
https://[domain]/taxii2/root
.426e3acb-db50-4118-be7e-648fab67c16c
.Additional configuration options:
RSS Feeds functionality enables users to seamlessly ingest items in report form from specified RSS feeds. Configuring RSS Feeds involves providing essential details and selecting preferences to tailor the import process. Here's a step-by-step guide to configure RSS ingesters:
Additional configuration options:
Users can streamline the data ingestion process using various automated import capabilities. Each method proves beneficial in specific circumstances.
By leveraging these automated import functionalities, OpenCTI users can build a comprehensive, up-to-date threat intelligence database. The platform's adaptability and user-friendly configuration options ensure that intelligence workflows remain agile, scalable, and tailored to the unique needs of each organization.
"},{"location":"usage/import-files/","title":"Import from files","text":""},{"location":"usage/import-files/#import-mechanisms","title":"Import mechanisms","text":"The platform provides a seamless process for automatically parsing data from various file formats. This capability is facilitated by two distinct mechanisms:
"},{"location":"usage/import-files/#file-import-connectors","title":"File import connectors","text":"Currently, there are two connectors designed for importing files and automatically identifying entities.
ImportFileStix
: Designed to handle STIX-structured files (json or xml format).ImportDocument
: Versatile connector supporting an array of file formats, including pdf, text, html, and markdown.The CSV mapper is a tailored functionality to facilitate the import of data stored in CSV files. For more in-depth information on using CSV mappers, refer to the CSV Mappers documentation page.
"},{"location":"usage/import-files/#usage","title":"Usage","text":""},{"location":"usage/import-files/#locations","title":"Locations","text":"Both mechanisms can be employed wherever file uploads are possible. This includes the \"Data\" tabs of all entities and the dedicated panel named \"Data import and analyst workbenches\" located in the top right-hand corner (database logo with a small gear). Importing files from these two locations is not entirely equal; refer to the \"Relationship handling from entity's Data tab\" section below for details on this matter.
"},{"location":"usage/import-files/#entity-identification-process","title":"Entity identification process","text":"For ImportDocument
connector, the identification process involves searching for existing entities in the platform and scanning the document for relevant information. In addition, the connector uses regular expressions (regex) to detect IP addresses and domains within the document.
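For illustration, a much-simplified version of such regex-based detection could look like the following; these patterns are a sketch, not the connector's actual expressions:

```python
import re

# Hypothetical, simplified patterns — the real connector's regexes differ.
IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
DOMAIN_RE = re.compile(r"\b(?:[a-z0-9-]+\.)+[a-z]{2,}\b", re.IGNORECASE)

def extract_observables(text):
    """Return candidate observables found in free text, grouped by type."""
    ips = set(IPV4_RE.findall(text))
    # Keep only domain matches that are not actually IP addresses.
    domains = {d for d in DOMAIN_RE.findall(text) if not IPV4_RE.fullmatch(d)}
    return {"IPv4-Addr": sorted(ips), "Domain-Name": sorted(domains)}

found = extract_observables("C2 at 198.51.100.7 and evil.example.com were observed.")
print(found["IPv4-Addr"])  # → ['198.51.100.7']
```

In the real connector, the candidates found this way are then matched against existing platform entities and proposed in a workbench for review.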
As for the ImportFileStix
connector and the CSV mappers, there is no identification mechanism. The imported data will be, respectively, the data defined in the STIX bundle or according to the configuration of the CSV mapper used.
It's essential to note that CSV mappers operate differently from other import mechanisms. Unlike connectors, CSV mappers do not generate workbenches. Instead, the data identified by CSV mappers is imported directly into the platform without an intermediary workbench stage.
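Conceptually, a CSV mapper binds CSV columns to attributes of a target entity type. A toy sketch of that idea is shown below; the mapper format and column names here are invented for illustration and are not the platform's actual mapper schema:

```python
import csv
import io

# Hypothetical mapper: one entity type, column -> attribute bindings.
mapper = {"entity_type": "Malware", "columns": {"name": "name", "desc": "description"}}

def apply_mapper(csv_text, mapper):
    """Turn each CSV row into one entity dict according to the mapper."""
    out = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        entity = {"type": mapper["entity_type"]}
        for col, attr in mapper["columns"].items():
            entity[attr] = row[col]
        out.append(entity)
    return out

sample = "name,desc\nEmotet,Banking trojan turned loader\n"
entities = apply_mapper(sample, mapper)
print(entities[0]["name"])  # → Emotet
```

As noted above, the platform applies the configured mapping directly at import time, without the workbench stage used by the file connectors.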
"},{"location":"usage/import-files/#relationship-handling-from-entitys-data-tab","title":"Relationship handling from entity's \"Data\" tab","text":"When importing a document directly from an entity's \"Data\" tab, there can be an automatic addition of relationships between the objects identified by connectors and the entity in focus. The process differs depending on the type of entity in which the import occurs:
Related to
relationships between the Observables and the entity are automatically added to the workbench and created once the workbench is validated. Expanding the scope of file imports, users can seamlessly add files in the Content
tab of Analyses or Cases. In this scenario, the file is directly added as an attachment without utilizing an import mechanism.
In order to initiate file imports, users must possess the requisite capability: \"Upload knowledge files.\" This capability ensures that only authorized users can contribute and manage knowledge files within the OpenCTI platform, maintaining a controlled and secure environment for data uploads.
Deprecation warning
Using the ImportDocument
connector to parse CSV files is now disallowed, as it produces inconsistent results. Please configure and use CSV mappers dedicated to your specific CSV content for reliable parsing. CSV mappers can be created and configured in the administration interface.
OpenCTI enforces strict rules to determine the period during which an indicator is effective for detection. This period is defined by the valid_from
and valid_until
dates. Throughout this validity period, the indicator score
will decrease according to a customizable algorithm.
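As a purely illustrative example — the actual decay algorithm is customizable in the platform and is not reproduced here — a linear decay from an initial score to zero over the validity period could look like:

```python
def linear_decay(initial_score, days_elapsed, ttl_days):
    """Illustrative linear decay: the score reaches 0 at the end of the TTL."""
    if ttl_days <= 0:
        return 0
    remaining = max(0.0, 1.0 - days_elapsed / ttl_days)
    return round(initial_score * remaining)

# Halfway through a 180-day validity period, a score of 80 has halved:
print(linear_decay(80, 90, 180))  # → 40
```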
After the indicator fully expires, the object is marked as revoked
and the detection
field is automatically set to false
. Here, we outline how these dates are calculated within the OpenCTI platform. This documentation will also be enhanced later to cover the score impact.
If a data source provides valid_from
and valid_until
dates when creating an indicator on the platform, these dates are used without modification.
If a data source does not provide validity dates, OpenCTI applies specific rules to determine these dates based on the \"main observable type\" of indicator and its associated markings.
| Indicator type | Marking | TTL (in days) |
| --- | --- | --- |
| IPv4-Addr and IPv6-Addr | TLP:CLEAR to TLP:AMBER | 30 |
| IPv4-Addr and IPv6-Addr | TLP:AMBER+STRICT and TLP:RED | 60 |
| IPv4-Addr and IPv6-Addr | Others | 60 |
| URL | TLP:CLEAR to TLP:GREEN | 60 |
| URL | TLP:AMBER to TLP:RED | 180 |
| URL | Others | 180 |
| Others (e.g. Domain-Name, File, YARA) | All | 365 |
"},{"location":"usage/indicators-lifecycle/#understanding-time-to-live-ttl","title":"Understanding Time-To-Live (TTL)","text":"The TTL represents the duration for which an indicator is considered valid - i.e. here, the number of days between valid_from
and valid_until
. After this period, the indicator is marked as revoked.
If a URL indicator with TLP:AMBER
marking is created without specific validity dates, it will be considered valid for 180 days from its valid_from
date. After 180 days, the valid_until
date will be reached and the indicator will be automatically revoked.
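The defaulting rules above can be sketched as a small lookup; this is a simplified reimplementation for illustration, and the platform's own logic remains authoritative:

```python
from datetime import datetime, timedelta

def default_ttl_days(observable_type, marking):
    """Return the default TTL in days, per the rules described above."""
    if observable_type in ("IPv4-Addr", "IPv6-Addr"):
        return 30 if marking in ("TLP:CLEAR", "TLP:GREEN", "TLP:AMBER") else 60
    if observable_type == "URL":
        return 60 if marking in ("TLP:CLEAR", "TLP:GREEN") else 180
    return 365  # Domain-Name, File, YARA, etc.

def default_valid_until(valid_from, observable_type, marking):
    """Compute valid_until when the data source did not provide one."""
    return valid_from + timedelta(days=default_ttl_days(observable_type, marking))

vf = datetime(2024, 1, 1)
print(default_valid_until(vf, "URL", "TLP:AMBER"))  # → 2024-06-29 00:00:00
```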
Understanding how OpenCTI calculates validity periods is essential for effective threat intelligence analysis. These rules ensure that your indicators are accurate and up-to-date, providing a reliable foundation for threat intelligence data.
"},{"location":"usage/inferences/","title":"Inferences and reasoning","text":""},{"location":"usage/inferences/#overview","title":"Overview","text":"OpenCTI\u2019s inferences and reasoning capability is a robust engine that automates the process of relationship creation within your threat intelligence data. This capability, situated at the core of OpenCTI, allows logical rules to be applied to existing relationships, resulting in the automatic generation of new, pertinent connections.
"},{"location":"usage/inferences/#understanding-inferences-and-reasoning","title":"Understanding inferences and reasoning","text":"Inferences and reasoning serve as OpenCTI\u2019s intelligent engine. It interprets your data logically. By activating specific predefined rules (of which there are around twenty), OpenCTI can deduce new relationships from the existing ones. For instance, if there's a connection indicating an Intrusion Set targets a specific country, and another relationship stating that this country is part of a larger region, OpenCTI can automatically infer that the Intrusion Set also targets the broader region.
"},{"location":"usage/inferences/#key-benefits","title":"Key benefits","text":"When you activate an inference rule, OpenCTI continuously analyzes your existing relationships and applies the defined logical rules. These rules are logical statements that define conditions for new relationships. When the set of conditions is met, OpenCTI creates the corresponding relationship automatically.
For example, if you activate a rule as follows:
IF [Entity A targets Identity B] AND [Identity B is part of Identity C] THEN [Entity A targets Identity C]
OpenCTI will apply this rule to existing data. If it finds an Intrusion Set (\"Entity A\") targeting a specific country (\"Identity B\") and that country is part of a larger region (\"Identity C\"), the platform will automatically establish a relationship between the Intrusion Set and the region.
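A minimal sketch of that transitive rule over a set of (source, relation, target) triples is shown below; this is illustrative only, as the real engine handles around twenty rules and applies them incrementally as data changes:

```python
def infer_transitive_targets(relationships):
    """Apply: IF A targets B AND B part-of C THEN A targets C."""
    targets = {(s, t) for s, r, t in relationships if r == "targets"}
    part_of = {(s, t) for s, r, t in relationships if r == "part-of"}
    inferred = set()
    for a, b in targets:
        for b2, c in part_of:
            # Only add the relationship if it does not already exist.
            if b == b2 and (a, c) not in targets:
                inferred.add((a, "targets", c))
    return inferred

rels = [
    ("APT29", "targets", "France"),
    ("France", "part-of", "Western Europe"),
]
inferred_rels = infer_transitive_targets(rels)
print(inferred_rels)  # → {('APT29', 'targets', 'Western Europe')}
```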
"},{"location":"usage/inferences/#identifying-inferred-relationships","title":"Identifying inferred relationships","text":"In the knowledge graphs: Inferred relationships are represented by dotted lines of a different color, distinguishing them from non-inferred relations.
In the lists: In a relationship list, a magic wand icon at the end of the line indicates a relationship created by inference.
"},{"location":"usage/inferences/#additional-resources","title":"Additional resources","text":"Manual data creation in OpenCTI is an intuitive process that occurs throughout the platform. This page provides guidance on two key aspects of manual creation: Entity creation and Relationship creation.
"},{"location":"usage/manual-creation/#entity-creation","title":"Entity creation","text":"To create an entity:
Before delving into the creation of relationships between objects in OpenCTI, it's crucial to grasp some foundational concepts. Here are key points to understand:
Now, let\u2019s explore the process of creating relationships. To do this, we will differentiate the case of containers from the others.
"},{"location":"usage/manual-creation/#for-container","title":"For container","text":"When it comes to creating relationships within containers in OpenCTI, the process is straightforward. Follow these steps to attach objects to a container:
When creating relationships not involving a container, the creation method is distinct. Follow these steps to create relationships between entities:
While the aforementioned methods are primary for creating entities and relationships, OpenCTI offers versatility, allowing users to create objects in various locations within the platform. Here's a non-exhaustive list of additional places that facilitate on-the-fly creation:
These supplementary methods offer users flexibility and convenience, allowing them to adapt their workflow to various contexts within the OpenCTI platform. As users explore the platform, they will naturally discover additional means of creating entities and relationships.
"},{"location":"usage/merging/","title":"Merge objects","text":""},{"location":"usage/merging/#introduction","title":"Introduction","text":"OpenCTI\u2019s merge capability stands as a pivotal tool for optimizing threat intelligence data, allowing to consolidate multiple entities of the same type. This mechanism serves as a powerful cleanup tool, harmonizing the platform and unifying scattered information. In this section, we explore the significance of this feature, the process of merging entities, and the strategic considerations involved.
"},{"location":"usage/merging/#data-streamlining","title":"Data streamlining","text":"In the ever-expanding landscape of threat intelligence and the multitude of names chosen by different data sources, data cleanliness is essential. Duplicates and fragmented information hinder efficient analysis. The merge capability is a strategic solution for amalgamating related entities into a cohesive unit. Central to the merging process is the selection of a main entity. This primary entity becomes the anchor, retaining crucial attributes such as name and description. Other entities, while losing specific fields like descriptions, are aliased under the primary entity. This strategic decision preserves vital data while eliminating redundancy.
"},{"location":"usage/merging/#preserving-entity-relationships","title":"Preserving entity relationships","text":"One of the key features of the merge capability is its ability to preserve relationships. While merging entities, their interconnected relationships are not lost. Instead, they seamlessly integrate into the new, merged entity. This ensures that the intricate web of relationships within the data remains intact, fostering a comprehensive understanding of the threat landscape.
"},{"location":"usage/merging/#conclusion","title":"Conclusion","text":"OpenCTI\u2019s merge capability helps improve the quality of threat intelligence data. By consolidating entities and centralizing relationships, OpenCTI empowers analysts to focus on insights and strategies, unburdened by data silos or fragmentation. However, exercising caution and foresight in the merging process is essential, ensuring a robust and streamlined knowledge basis.
"},{"location":"usage/merging/#additional-resources","title":"Additional resources","text":"In the STIX 2.1 standard, objects can:
refer to other objects directly in their attributes, by referencing one or multiple IDs;
embed other (nested) objects within themselves.
In the previous example, we have 2 nested references to other objects in:
\"created_by_ref\": \"identity--f431f809-377b-45e0-aa1c-6a4751cae5ff\", // nested reference to an identity\n\"object_marking_refs\": [\"marking-definition--34098fce-860f-48ae-8e50-ebd3cc5e41da\"], // nested reference to multiple marking defintions\n
But we also have a nested object within the entity (an External Reference
):
\"external_references\": [\n{\n\"source_name\": \"veris\",\n\"external_id\": \"0001AA7F-C601-424A-B2B8-BE6C9F5164E7\",\n\"url\": \"https://github.com/vz-risk/VCDB/blob/125307638178efddd3ecfe2c267ea434667a4eea/data/json/validated/0001AA7F-C601-424A-B2B8-BE6C9F5164E7.json\", }\n]\n
"},{"location":"usage/nested/#implementation","title":"Implementation","text":""},{"location":"usage/nested/#modelization","title":"Modelization","text":"In OpenCTI, all nested references and objects are modelized as relationships, to be able to pivot more easily on labels, external references, kill chain phases, marking definitions, etc.
"},{"location":"usage/nested/#import-export","title":"Import & export","text":"When importing and exporting data to/from OpenCTI, the translation between nested references and objects to full-fledged nodes and edges is automated and therefore transparent for the users. Here is an example with the object in the graph above:
{\n\"id\": \"file--b6be3f04-e50f-5220-af3a-86c2ca66b719\",\n\"spec_version\": \"2.1\",\n\"x_opencti_description\": \"...\",\n\"x_opencti_score\": 50,\n\"hashes\": {\n\"MD5\": \"b502233b34256285140676109dcadde7\"\n},\n\"labels\": [\n\"cookiecutter\",\n\"clouddata-networks-1\"\n],\n\"external_references\": [\n{\n\"source_name\": \"Sekoia.io\",\n\"url\": \"https://app.sekoia.io/intelligence/objects/indicator--3e6d61b4-d5f0-48e0-b934-fdbe0d87ab0c\"\n}\n],\n\"x_opencti_id\": \"8a3d108f-908c-4833-8ff4-4d6fc996ce39\",\n\"type\": \"file\",\n\"created_by_ref\": \"identity--b5b8f9fc-d8bf-5f85-974e-66a7d6f8d4cb\",\n\"object_marking_refs\": [\n\"marking-definition--613f2e26-407d-48c7-9eca-b8e91df99dc9\"\n]\n}\n
"},{"location":"usage/notifications/","title":"Notifications and alerting","text":"It is possible to receive notifications
through different notifier connectors (e.g email or directly on the platform interface) triggered by events such as entity creation
, modification
or deletion
.
"},{"location":"usage/notifications/#triggers","title":"Triggers","text":"
Each user can create their own triggers. Triggers listen all the events that respect their filters and their event types, and notify the user of those events via the chosen notifier(s).
A platform administrator can create and manage triggers for a user, who will remain the trigger administrator
, as well as for a group or an organization. Users belonging to this group or organization will then have read-only
access rights on this trigger. The user can use filters to ensure that the created triggers are as accurate as possible.
Instance triggers are specific live triggers that listen to one or several instance(s). To create an instance trigger, you can
An instance trigger on an entity X notifies the following events:
Note: The notification of an entity deletion can either provides from the real deletion of an entity, either from a modification of the entity that leads to the user loss of visibility for the entity.
"},{"location":"usage/notifications/#digest","title":"Digest","text":"A digest allows triggering the sending of notifications based on multiple triggers
over a given period.
OpenCTI as some built-in notifier connectors that can be used as notifier in for Notification and Activity alerting. Connectors are registered with a schema describing how the connector will interact. For example, the webhook connector has the following schema: - A verb (GET, POST, PUT, ...) - A URL - A template - Some params & headers send through the request
OpenCTI provides 3 built-in connectors: a webhook connector, a simplified email connector and a platform mailer connector. By default, OpenCTI also provides 2 sample notifiers to communicate to Teams through a webhook.
"},{"location":"usage/notifications/#usage","title":"Usage","text":"The notifiers configured in the admin section can be protected through RBAC and only accessible to specific User/Group/Organization. Those specified members can use the notifiers directly when configuring their triggers/digest/activity alerts.
The 2 built-in notifiers are still available: Default mailer and User interface
"},{"location":"usage/notifiers/","title":"Notifiers","text":""},{"location":"usage/notifiers/#sample-notifiers","title":"Sample notifiers","text":""},{"location":"usage/notifiers/#configure-teams-webhook","title":"Configure Teams webhook","text":"You can check the Microsoft website
"},{"location":"usage/notifiers/#default-teams-message-for-live-trigger","title":"Default teams message for live trigger","text":"The default configuration for a Teams message sent through webhook for a live notification is:
{\n \"template\": {\n \"type\": \"message\",\n \"attachments\": [\n {\n \"contentType\": \"application/vnd.microsoft.card.thumbnail\",\n \"content\": {\n \"subtitle\": \"Operation : <%=content[0].events[0].operation%>\",\n \"text\": \"<%=(new Date(notification.created)).toLocaleString()%>\",\n \"title\": \"<%=content[0].events[0].message%>\",\n \"buttons\": [\n {\n \"type\": \"openUrl\",\n \"title\": \"See in OpenCTI\",\n \"value\": \"https://YOUR_OPENCTI_URL/dashboard/id/<%=content[0].events[0].instance_id%>\"\n }\n ]\n }\n }\n ]\n }\n \"url\": \"https://YOUR_DOMAIN.webhook.office.com/YOUR_ENDPOINT\",\n \"verb\": \"POST\"\n}\n
"},{"location":"usage/notifiers/#default-teams-message-for-digest","title":"Default teams message for digest","text":"The default configuration for a Teams message sent through webhook for a digest notification is:
{\n \"template\": {\n \"type\": \"message\",\n \"attachments\": [\n {\n \"contentType\": \"application/vnd.microsoft.card.adaptive\",\n \"content\": {\n \"$schema\": \"http://adaptivecards.io/schemas/adaptive-card.json\",\n \"type\": \"AdaptiveCard\",\n \"version\": \"1.0\",\n \"body\": [\n {\n \"type\": \"Container\",\n \"items\": [\n {\n \"type\": \"TextBlock\",\n \"text\": \"<%=notification.name%>\",\n \"weight\": \"bolder\",\n \"size\": \"extraLarge\"\n }, {\n \"type\": \"TextBlock\",\n \"text\": \"<%=(new Date(notification.created)).toLocaleString()%>\",\n \"size\": \"medium\"\n }\n ]\n },\n <% for(var i=0; i<content.length; i++) { %>\n {\n \"type\": \"Container\",\n \"items\": [<% for(var j=0; j<content[i].events.length; j++) { %>\n {\n \"type\" : \"TextBlock\",\n \"text\" : \"[<%=content[i].events[j].message%>](https://YOUR_OPENCTI_URL/dashboard/id/<%=content[i].events[j].instance_id%>)\"\n }<% if(j<(content[i].events.length - 1)) {%>,<% } %>\n <% } %>]\n }<% if(i<(content.length - 1)) {%>,<% } %>\n <% } %>\n ]\n }\n }\n ],\n \"dataString\": <%-JSON.stringify(notification)%>\n },\n \"url\": \"https://YOUR_DOMAIN.webhook.office.com/YOUR_ENDPOINT\",\n \"verb\": \"POST\"\n}\n
"},{"location":"usage/overview/","title":"Overview","text":""},{"location":"usage/overview/#introduction","title":"Introduction","text":"The following chapter aims at giving the reader a step-by-step description of what is available on the platform and the meaning of the different tabs and entries.
When the user connects to the platform, the home page is the Dashboard
. This Dashboard
contains several visuals summarizing the types and quantity of data recently imported into the platform.
Dashboard
To get more information about the components of the default dashboard, you can consult the Getting started.
The left side panel allows the user to navigate through different windows and access different views and categories of knowledge.
"},{"location":"usage/overview/#structure","title":"Structure","text":""},{"location":"usage/overview/#the-hot-knowledge","title":"The \"hot knowledge\"","text":"The first part of the platform in the left menu is dedicated to what we call the \"hot knowledge\", which means this is the entities and relationships which are added on a daily basis in the platform and which generally require work / analysis from the users.
Analyses
: all containers which convey relevant knowledge such as reports, groupings and malware analyses.Cases
: all types of case like incident responses, requests for information, for takedown, etc.Events
: all incidents & alerts coming from operational systems as well as sightings.Observations
: all technical data in the platform such as observables, artifacts and indicators.The second part of the platform in the left menu is dedicated to the \"cold knowledge\", which means this is the entities and relationships used in the hot knowledge. You can see this as the \"encyclopedia\" of all pieces of knowledge you need to get context: threats, countries, sectors, etc.
Threats
: all threats entities from campaigns to threat actors, including intrusion sets.Arsenal
: all tools and pieces of malware used and/or targeted by threats, including vulnerabilities.Techniques
: all objects related to tactics and techniques used by threats (TTPs, etc.).Entities
: all non-geographical contextual information such as sectors, events, organizations, etc.Locations
: all geographical contextual information, from cities to regions, including precise positions.You can customize the experience in the platform by hiding some categories in the left menu, whether globally or for a specific role.
"},{"location":"usage/overview/#hide-categories-globally","title":"Hide categories globally","text":"In the Settings > Parameters
, it is possible for the platform administrator to hide categories in the platform for all users.
In OpenCTI, the different roles are highly customizable. It is possible to defined default dashboards, triggers, etc. but also be able to hide categories in the roles:
"},{"location":"usage/overview/#presentation-of-a-typical-page-in-opencti","title":"Presentation of a typical page in OpenCTI","text":"Although there are many different entities in OpenCTI and many different tabs, most of them are quite similar and only have minor differences from the other, mostly due to some of their characteristics, which requires specific fields or do not require some fields which are necessary for the other.
In this part will only be detailed a general outline of a \"typical\" OpenCTI page. The specifies of the different entities will be detailed in the corresponding pages below (Activities and Knowledge).
"},{"location":"usage/overview/#overview_1","title":"Overview","text":"In the Overview
tab on the entity, you will find all properties of the entity as well as the recent activities.
First, you will find the Details
section, where are displayed all properties specific to the type of entity you are looking at, an example below with a piece of malware:
Thus, in the Basic information
section, are displayed all common properties to all objects in OpenCTI, such as the marking definition, the author, the labels (i.e. tags), etc.
Below these two sections, you will find latest modifications in the Knowledge base related to the Entity:
Latest created relationships
: display the latest relationships that have been created from or to this Entity. For example, the latest Indicators of Compromise and associated Threat Actors of a Malware.latest containers about the object
: display all the Cases and Analyses that contain this Entity. For example, the latest Reports about a Malware.External references
: display all the external sources associated with the Entity. You will often find here links to the external reports or webpages from which the Entity's information came.History
: display the latest chronological modifications of the Entity and its relationships that occurred in the platform, in order to trace back any alteration.
Last, all Notes written by users of the platform about this Entity are displayed in order to access unstructured analysis comments.
"},{"location":"usage/overview/#knowledge","title":"Knowledge","text":"In the Knowledge
tab, which is the central part of the entity, you will find all the Knowledge related to the current entity. The Knowledge
tab is different for Analyses (Report
, Groupings
) and Cases (Incident response
, Request for Information
, Request for Takedown
) entities than for all the other entity types.
Knowledge
tab of those entities (which represent Analyses or Cases that can contain a collection of Objects) is the place to integrate and link together entities. For more information on how to integrate information in OpenCTI using the knowledge tab of a report, please refer to the part Manual creation.Knowledge
tabs of any other entity (that does not aim to contain a collection of Objects) gather all the entities which have been at some point linked to the entity the user is looking at. For instance, as shown in the following capture, the Knowledge
tab of Intrusion set APT29, gives access to the list of all entities APT29 is attributed to, all victims the intrusion set has targeted, all its campaigns, TTPs, malware etc. For entities to appear in these tabs under Knowledge
, they need to have been linked to the entity directly or have been computed with the inference engine.The Indicators
and Observables
section offers 3 display modes: - The entities view
, which displays the indicators/observables linked to the entity. - The relationship view
, which displays the various relationships between the indicators/observables linked to the entity and the entity itself. - The contextual view
, which displays the indicators/observables contained in the cases and analyses that contain the entity.
The Content
tab allows for uploading and creating outcome documents related to the content of the current entity (in PDF, text, HTML or markdown files). This specific tab enables you to preview, manage and write deliverables associated with the entity, for example an analytic report to share with other teams, or a markdown file to feed a collaborative wiki.
The Content
tab is available for a subset of entities: Report
, Incident
, Incident response
, Request for Information
, and Request for Takedown
.
The Analyses
tab contains the list of all Analyses (Report
, Groupings
) and Cases (Incident response
, Request for Information
, Request for Takedown
) in which the entity has been identified.
By default, this tab displays the list, but you can also display the content of all the listed Analyses on a graph, allowing you to explore all their Knowledge and get a glance at the context around the Entity.
"},{"location":"usage/overview/#data","title":"Data","text":"The Data
tab contains documents that are associated with the object and were either:
Analyst Workbenches can also be created from here. They will contain the entity by default.
In addition, the Data
tab of Threat actors (group)
, Threat actors (individual)
, Intrusions sets
, Organizations
, Individuals
have an extra panel:
"},{"location":"usage/overview/#history","title":"History","text":"
The History
tab displays the history of changes of the Entity: updates of attributes, creation of relations, etc.
Because of the volume of information, the history is written in a specific index that consumes the Redis stream to rebuild the history for the UI.
"},{"location":"usage/overview/#less-frequent-tabs","title":"Less frequent tabs","text":"Observables
tab (for Reports and Observed data): A table containing all SCO (Stix Cyber Observable) contained in the Report or the Observed data, with search and filters available. It also displays if the SCO has been added directly or through inferences with the reasoning engineEntities
tab (for Reports and Observed data): A table containing all SDO (Stix Domain Objects) contained in the Report or the Observed data, with search and filters available. It also displays if the SDO has been added directly or through inferences with the reasoning engineSightings
tab (for Indicators and Observables): A table containing all Sightings
relationships corresponding to events in which Indicators
(IP, domain name, url, etc.) are detected by or within an information system, an individual or an organization. Most often, this corresponds to a security event transmitted by a SIEM or EDR.In OpenCTI, all data can be represented as a large knowledge graph: everything is linked to something. You can pivot on any entity and on any relationship you have in your platform, using investigations.
Investigations are available on the top right of the top bar:
Investigations are organized by workspace. When you create a new empty workspace, it is only visible to you, enabling you to work on your investigation before sharing it.
In your workspace, you can add the entities that you want to investigate, visualize the data linked to these entities, add relationships, and export your investigation graph as a PDF, an image or a new STIX report.
You can see next to them a bullet with a number inside. It is a visual indication showing you how many entities are linked to this one and not displayed in the graph yet. Note that this number is an approximation of the number of entities. That's why there is a ~
next to the number.
No bullet displayed means there is nothing to expand from this node.
"},{"location":"usage/pivoting/#add-and-expand-an-entity","title":"Add and expand an entity","text":"You can add any existing entity of the platform to your investigation.
Once added, you can select the entity, and see its details in the panel that appears on the right of the screen.
In the same menu as above, right next to \"Add an entity\", you can expand the selected entity. Clicking on the menu icon opens a new window where you can choose which types of entities and relationships you want to expand.
For each type of entity or relationship, the number of elements that will be added into the investigation graph is displayed in parentheses. This time there is no ~
symbol as the number is exact.
For example, in the image above, selecting target Malware and relationship Uses means: expand in my investigation graph all Malwares linked to this node with a relationship of type Uses.
"},{"location":"usage/pivoting/#add-a-relationship","title":"Add a relationship","text":"You can add a relationship between entities directly in your investigation.
"},{"location":"usage/pivoting/#export-your-investigation","title":"Export your investigation","text":"You can export your investigation in PDF or image format. You can also download all the content of your investigation graph as a Report STIX bundle (the investigation is automatically converted).
"},{"location":"usage/pivoting/#turn-your-investigation-to-report-or-case","title":"Turn your investigation to Report or Case","text":"You can turn your investigation into: - a grouping - an incident response - a report - a request for information - a request for takedown
Either you create a new report or case,
or you select an existing entity.
Once you have clicked on the ADD
button, the browser will be redirected to the Knowledge
tab of the Report or Case to which you added the content of your investigation. If you added it to multiple reports or cases, you will be redirected to the first one in the list.
In (Cyber) Threat Intelligence, the evaluation of information sources and of information quality is one of the most important aspects of the work. It is of the utmost importance to assess situations by taking into account the reliability of the sources and the credibility of the information.
This concept is foundational in OpenCTI and has a real impact on:
The Reliability of a source of information is a measurement of the trust that the analyst can place in the source, based on its technical capabilities or history. Is the source a reliable partner with a long sharing history? A competitor? Unknown?
The reliability of sources is often stated at the organizational level, as it requires an overview of the whole history with them.
In the Intelligence field, Reliability is often notated with the NATO Admiralty code.
"},{"location":"usage/reliability-confidence/#what-is-confidence-of-an-information","title":"What is Confidence of an information?","text":"The reliability of a source is important, but even a trusted source can be wrong. Information in itself has credibility, based on what is known about the subject and the level of corroboration by other sources.
Credibility is often stated at the analyst team level, as experts on the subject are able to judge the information within its context.
In the Intelligence field, Confidence is often notated with the NATO Admiralty code.
Why Confidence instead of Credibility?
Using both Reliability and Credibility is an advanced use case for most CTI teams. It requires a mature organization and a well-staffed team. For most internal CTI teams, a simple confidence level is enough to forge an assessment, in particular for teams that concentrate on technical CTI.
Thus, in OpenCTI, we have made the choice to fuse the notion of Credibility with the Confidence level that is commonly used by the majority of users. Users now have the liberty to push their practice forward and use both Confidence and Reliability in their daily assessments.
"},{"location":"usage/reliability-confidence/#reliability-open-vocabulary","title":"Reliability open vocabulary","text":"Reliability value can be set for every Entity in the platform that can be Author of Knowledge:
Organizations
Individuals
Systems
Reports
Reliability on Reports
allows you to specify the reliability associated with the original author of the report if you received it through a provider.
For all Knowledge in the platform, the reliability of the source of the Knowledge (author) is displayed in the Overview. This way, you can always forge your assessment of the provided Knowledge regarding the reliability of the author.
You can also now filter entities by the reliability of their author.
Tip
This way, you may choose to feed your work with only Knowledge provided by reliable sources.
Reliability is an open vocabulary that can be customized in Settings -> Taxonomies -> Vocabularies : reliability_ov.
Info
The default setting is the Reliability scale from the NATO Admiralty code, but you can define whatever best fits your organization.
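For illustration, the default NATO Admiralty reliability letters behind reliability_ov can be modelled as a simple mapping (a sketch only; the labels below follow the standard Admiralty scale and may differ if you have customized the vocabulary):

```python
# NATO Admiralty code reliability scale, as used by the default
# reliability_ov vocabulary. Labels are illustrative and customizable.
RELIABILITY_OV = {
    "A": "Completely reliable",
    "B": "Usually reliable",
    "C": "Fairly reliable",
    "D": "Not usually reliable",
    "E": "Unreliable",
    "F": "Reliability cannot be judged",
}

def describe_reliability(code: str) -> str:
    """Return the human-readable label for an Admiralty reliability letter."""
    return RELIABILITY_OV.get(code.upper(), "Unknown code")
```

The helper accepts lower- or upper-case letters, mirroring how the value is simply an open-vocabulary entry attached to the author entity.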
"},{"location":"usage/reliability-confidence/#confidence-scale","title":"Confidence scale","text":"Confidence level can be set for:
Report
, Grouping
, Malware analysis
, Notes
Incident Response
, Request for Information
, Request for Takedown
, Feedback
Incident
, Sighting
, Observed data
Indicator
, Infrastructure
Threat actor (Group)
, Threat actor (Individual)
, Intrusion Set
, Campaign
Malware
, Channel
, Tool
, Vulnerability
For all of these entities, the Confidence level is displayed in the Overview, along with the Reliability. This way, you can rapidly assess the Knowledge with the Confidence level representing the credibility/quality of the information.
"},{"location":"usage/reliability-confidence/#confidence-scale-customization","title":"Confidence scale customization","text":"Confidence level is a numerical value between 0 and 100, but multiple \"ticks\" can be defined and labelled to provide a meaningful scale.
Confidence level can be customized for each entity type in Settings > Customization > Entity type.
As such customization can be cumbersome, three confidence level templates are provided in OpenCTI:
It is always possible to modify an existing template to define a custom scale adapted to your context.
Tip
If you use the Admiralty code setting for both reliability and Confidence, you will find yourself with the equivalent of NATO confidence notation in the Overview of your different entities (A1, B2, C3, etc.)
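As a sketch, assuming you use the default Admiralty code template for both values, the combined NATO notation (A1, B2, C3, etc.) can be derived like this (the tick labels follow the standard Admiralty credibility scale and may differ from your platform's customization):

```python
# Admiralty credibility ticks (1-6), matching the default "Admiralty code"
# confidence template; labels are illustrative and customizable.
CONFIDENCE_TICKS = {
    1: "Confirmed by other sources",
    2: "Probably true",
    3: "Possibly true",
    4: "Doubtful",
    5: "Improbable",
    6: "Truth cannot be judged",
}

RELIABILITY_LETTERS = set("ABCDEF")  # NATO Admiralty reliability scale

def nato_rating(reliability: str, confidence_tick: int) -> str:
    """Combine a source reliability letter and an information confidence
    tick into the combined NATO notation (e.g. "B2")."""
    letter = reliability.upper()
    if letter not in RELIABILITY_LETTERS or confidence_tick not in CONFIDENCE_TICKS:
        raise ValueError("invalid Admiralty rating")
    return f"{letter}{confidence_tick}"
```

This matches the worked example later in this page: a "B - Usually Reliable" provider and a "2 - Probably True" assessment combine into a "B2" rating.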
"},{"location":"usage/reliability-confidence/#usage-in-opencti","title":"Usage in OpenCTI","text":""},{"location":"usage/reliability-confidence/#example-with-the-admiralty-code-template","title":"Example with the admiralty code template","text":"Your organization has received a report from a CTI provider. At your organization's level, this provider is considered reliable most of the time and its reliability level has been set to \"B - Usually Reliable\" (your organization uses the Admiralty code).
This report concerns the ransomware threat landscape and has been analysed by your CTI analyst specialized in cybercrime. This analyst has granted a confidence level of \"2 - Probably True\" to the information.
As a technical analyst, through the combined Reliability and Confidence notations, you now know that the technical elements of this report are probably worth consideration.
"},{"location":"usage/reliability-confidence/#example-with-the-objective-template","title":"Example with the Objective template","text":"As a CTI analyst in a governmental CSIRT, you build up Knowledge that will be shared within the platform to beneficiaries. Your CSIRT is considered a reliable source by your beneficiaries, even if you play the role of a proxy for other sources, but your beneficiaries need some insight into how the Knowledge has been built or gathered.
For that, you use the \"Objective\" confidence scale in your platform. When the Knowledge is the result of your CSIRT's own investigation, either from incident response or attack infrastructure investigation, you set the confidence level to \"Witnessed\", \"Deduced\" or \"Induced\" (depending on whether you observed the data directly or inferred it during your research). When the information has not been verified by the CSIRT but has value to be shared with beneficiaries, you can use the \"Told\" level to make it clear to them that the information is probably valuable but has not been verified.
"},{"location":"usage/search/","title":"Search for knowledge","text":"In OpenCTI, you have access to different capabilities to be able to search for knowledge in the platform. In most cases, a search by keyword can be refined with additional filters for instance on the type of object, the author etc.
"},{"location":"usage/search/#global-search","title":"Global search","text":"The global search is always available in the top bar of the platform.
This search covers all STIX Domain Objects (SDOs) and STIX Cyber Observables (SCOs) in the platform. The search results are sorted according to the following behaviour:
name
, the aliases
and the description
attributes (full text search).If you get unexpected results, it is always possible to add some filters after the initial search:
Also, using the Advanced search
button, it is possible to directly put filters in a global search:
The bulk search capability is available in the top bar of the platform and allows you to copy-paste a list of keywords or objects (i.e. a list of domains, IP addresses, vulnerabilities, etc.) to search in the platform:
When searching in bulk, OpenCTI is only looking for an exact match in some properties:
name
aliases
x_opencti_aliases
x_mitre_id
value
subject
abstract
hashes_MD5
hashes_SHA1
hashes_SHA256
hashes_SHA512
x_opencti_additional_names
When something is not found, it appears in the list as Unknown
and will be excluded if you choose to export your search result in a JSON STIX bundle or in a CSV file.
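As an illustrative sketch (not the actual implementation), the exact-match behaviour of bulk search over the listed properties, including the "Unknown" fallback, can be modelled as:

```python
# Properties inspected by bulk search, as listed above. Bulk search only
# looks for exact matches in these, never full-text matches.
SEARCHED_PROPERTIES = [
    "name", "aliases", "x_opencti_aliases", "x_mitre_id", "value",
    "subject", "abstract", "hashes_MD5", "hashes_SHA1", "hashes_SHA256",
    "hashes_SHA512", "x_opencti_additional_names",
]

def bulk_search(keywords, objects):
    """Map each pasted keyword to the first matching object, or "Unknown"."""
    results = {}
    for kw in keywords:
        match = None
        for obj in objects:
            for prop in SEARCHED_PROPERTIES:
                val = obj.get(prop)
                # alias-like properties hold lists, the others plain strings
                if val == kw or (isinstance(val, list) and kw in val):
                    match = obj
                    break
            if match:
                break
        results[kw] = match if match else "Unknown"
    return results
```

Keywords mapped to "Unknown" are the ones that would be excluded from a STIX or CSV export of the search result.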
In most of the screens of knowledge, you always have a contextual search bar allowing you to filter the list you are on:
The search keyword used here is taken into account if you decide to export the current view in a file such as a JSON STIX bundle or a CSV file.
"},{"location":"usage/search/#other-search-bars","title":"Other search bars","text":"Some other screens can contain search bars for specific purposes. For instance, in the graph views to filter the nodes displayed on the graph:
"},{"location":"usage/workbench/","title":"Analyst workbench","text":"Workbenches serve as dedicated workspaces for manipulating data before it is officially imported into the platform.
"},{"location":"usage/workbench/#location-of-use","title":"Location of use","text":"The workbenches are located at various places within the platform:
"},{"location":"usage/workbench/#data-import-and-analyst-workbenches-window","title":"Data import and analyst workbenches window","text":"This window encompasses all the necessary tools for importing a file. Files imported through this interface will subsequently be processed by the import connectors, resulting in the creation of workbenches. Additionally, analysts can manually create a workbench by clicking on the \"+\" icon at the bottom right of the window.
"},{"location":"usage/workbench/#data-tabs-of-all-entities","title":"Data tabs of all entities","text":"Workbenches are also accessible through the \"Data\" tabs of entities, providing convenient access to import data associated with the entity.
"},{"location":"usage/workbench/#operation","title":"Operation","text":"Workbenches are automatically generated upon the import of a file through an import connector. When an import connector is initiated, it scans files for recognizable entities and subsequently creates a workbench. All identified entities are placed within this workbench for analyst reviews. Alternatively, analysts have the option to manually create a workbench by clicking on the \"+\" icon at the bottom right of the \"Data import and analyst workbenches\" window.
The workbench being a draft space, analysts use it to review connector proposals before finalizing them for import. Within the workbench, analysts have the flexibility to add, delete, or modify entities to meet specific requirements.
Once the content within the workbench is deemed acceptable, the analyst must initiate the ingestion process by clicking on Validate this workbench
. This action signifies writing the data in the knowledge base.
Workbenches are drafting spaces
Until the workbench is validated, the contained data remains in draft form and is not recorded in the knowledge base. This ensures that only reviewed and approved data is officially integrated into the platform.
For more information on importing files, refer to the Import from files documentation page.
"},{"location":"usage/workflows/","title":"Workflows and assignation","text":"Under construction
We are doing our best to complete this page. If you want to participate, don't hesitate to join the Filigran Community on Slack or submit your pull request on the Github doc repository.
"}]} \ No newline at end of file diff --git a/5.12.X/sitemap.xml b/5.12.X/sitemap.xml new file mode 100755 index 00000000..94142e31 --- /dev/null +++ b/5.12.X/sitemap.xml @@ -0,0 +1,403 @@ + +Enterprise edition
+Playbooks automation is available under the "Filigran Entreprise Edition" license.
+ +OpenCTI playbooks are flexible automation scenarios which can be fully customized and enabled by platform administrators to enrich, filter and modify the data created or updated in the platform.
+Playbook automation is accessible in the user interface under Data/Processing/Playbooks.
+You need the "Manage credentials" capability to use the Playbooks automation, because you will be able to manipulate data simple users cannot access.
+You will then be able to:
+Consider Playbook as STIX 2.1 bundle pipeline.
+Initiating with a component listening to a data stream, each subsequent component in the playbook processes a received STIX bundle. These components have the ability to modify the bundle and subsequently transmit the altered result to connected components.
+In this paradigm, components can send out the STIX 2.1 bundle to multiple components, enabling the development of multiple branches within your playbook.
+A well-designed playbook end with a component executing an action based on the processed information. For instance, this may involve writing the STIX 2.1 bundle in a data stream.
+Validate ingestion
+The STIX bundle processed by the playbook won't be written in the platform without specifying it using the appropriate component, i.e. "Send for ingestion".
+It is possible to create as many playbooks as needed which are running independently. You can give a name and description to each playbook.
+ +The first step to define in the playbook is the “triggering event”, which can be any knowledge event (create, update or delete) with customizable filters. To do so, click on the grey rectangle in the center of the workspace and choose the component to "listen knowledge events". Configure it with adequate filters. You can use same filters as in other part of the platform.
+ +Then you have flexible choices for the next steps to:
+Do not forget to start your Playbook when ready, with the Start option of the burger button placed near the name of your Playbook.
+By clicking the burger button of a component, you can replace it by another one.
+By clicking on the arrow icon in the bottom right corner of a component, you can develop a new branch at the same level.
+By clicking the "+" button on a link between components, you can insert a component between the two.
+Will write the received STIX 2.1 bundle in platform logs with configurable log level and then send out the STIX 2.1 bundle unmodified.
+Will pass the STIX 2.1 bundle to be written in the data stream. This component has no output and should end a branch of your playbook.
+Will allow you to define filter and apply it to the received STIX 2.1 bundle. The component has 2 output, one for data matching the filter and one for the remainder. +By default, filtering is applied to entities having triggered the playbook. You can toggle the corresponding option to apply it to all elements in the bundle (elements that might result from enrichment for example).
+Will send the received STIX 2.1 bundle to a compatible enrichement connector and send out the modifed bundle.
+Will add, replace or remove compatible attribute of the entities contains in the received STIX 2.1 bundle and send out the modified bundle. +By default, modification is applied to entities having triggered the playbook. You can toggle the corresponding option to apply it to all elements in the bundle (elements that might result from enrichment for example).
+Will modify the received STIX 2.1 bundle to include the entities into an container of the type you configured. +By default, wrapping is applied to entities having triggered the playbook. You can toggle the corresponding option to apply it to all elements in the bundle (elements that might result from enrichment for example).
+Will share every entity in the received STIX 2.1 bundle with Organizations you configured. Your platform need to have declare a platform main organization in Settings/Parameters.
+Will apply a complex automation built-in rule. This kind of rule might impact performance. Current rules are: +* First/Last seen computing extension from report publication date: will populate first seen and last seen date of entities contained in the report based on its publication date. +* Resolve indicators based on observables (add in bundle) +* Resolve observables an indicator is based on (add in bundle) +* Resolve container references (add in bundle)
+Will generate a Notification each time a STIX 2.1 bundle is received.
+Will generate indicator based on observables contained in the received STIX 2.1 bundle. +By default, it is applied to entities having triggered the playbook. You can toggle the corresponding option to apply it to all observables in the bundle (observables that might result from enrichment for example).
+Will extract observables based on indicators contained in the received STIX 2.1 bundle. +By default, it is applied to entities having triggered the playbook. You can toggle the corresponding option to apply it to all indicators in the bundle (indicators that might result from enrichment for example).
+Will elagate the received STIX 2.1 bundle based on the configured filter.
+At the top right of the interface, you can access execution trace of your playbook and consult the raw data after every step of your playbook execution.
+ + + + + + + + + + + + + + + + + + + +Three types of tasks are done in the background:
+Rule tasks can be seen and activated in Settings > Customization > Rules engine. +Knowledge and user tasks can be seen and managed in Data > Background Tasks. The scope of each task is indicated.
+ +If a rule task is enabled, it leads to the scan of the whole platform data and the creation of entities or relationships in case a configuration corresponds to the tasks rules. The created data are called 'inferred data'. Each time an event occurs in the platform, the rule engine checks if inferred data should be updated/created/deleted.
+Knowledge tasks are background tasks updating or deleting entities and correspond to mass operations on these data. To create one, select entities via the checkboxes in an entity list, and choose the action to perform via the toolbar.
+User tasks are background tasks updating or deleting notifications. It can be done from the Notification section, by selecting several notifications via the checkboxes, and choosing an action via the toolbar.
+Compiling CTI data in one place, deduplicate and correlate to transform it into Intelligence is very important. But ultimately, you need to act based on this Intelligence. Some situations will need to be taken care of, like cybersecurity incidents, requests for information or requests for takedown. Some actions will then need to be traced, to be coordinated and oversaw. Some actions will include feedback and content delivery.
+OpenCTI includes Cases to allow organizations to manage situations and organize their team's work. Better, by doing Case management in OpenCTI, you handle your cases with all the context and Intelligence you need, at hand.
+Multiple situations can be modelize in OpenCTI as a Case, either an Incident Response, a Request for Takedown or a Request for Information.
+ +All Cases can contain any entities and relationships you need to represent the Intelligence context related to the situation. At the beginning of your case, you may find yourself with only some Observables sighted in a system. At the end, you may have Indicators, Threat Actor, impacted systems, attack patterns. All representing your findings, ready to be presented and exported as graph, pdf report, timeline, etc.
+ +Some Cases may need some collaborative work and specific Tasks to be performed by people that have the skillset for. OpenCTI allows you to associate Tasks
in your Cases and assign them to users in the platform. As some type of situation may need the same tasks to be done, it is also possible to pre-define lists of tasks to be applied on your case. You can define these lists by accessing the Settings/Taxonomies/Case templates panel. Then you just need to add it from the overview of your desire Case.
Tip: A user can have a custom dashboard showing him all the tasks that have been assigned to him.
+ +As with other objects in OpenCTI, you can also leverage the Notes
to add some investigation and analysis related comments, helping you shaping up the content of your case with unstructured data and trace all the work that have been done.
You can also use Opinions
to collect how the Case has been handled, helping you to build Lessons Learned.
To trace the evolution of your Case and define specific resolution worflows, you can use the Status
(that can be define in Settings/Taxonomies/Status templates).
At the end of your Case, you will certainly want to report on what have been done. OpenCTI allows you to export the content of the Case in a simple but customizable PDF (currently in refactor). But of course, your company have its own documents' templates, right? With OpenCTI, you will be able to include some nice graphics in it. For example, a Matrix view of the attacker attack pattern or even a graph display of how things are connected.
+Also, we are currently working a more meaningfull Timeline view that will be possible to export too.
+Sighting
relationship between your System "SIEM permiter A" and the Observable "bad.com". Incident
in this situation, and you have created an alert based on new Incident that send you email notification
and Teams message (webhook).campaign
targeting your activity sector
. "bad.com" is clearly something to investigate ASAP.Incident response
case. You position the priority to High, regarding the context, and the severity to Low, as you don't know yet if someone really interact with "bad.com"Task
in your case for verifying if an actual interaction happened with "bad.com".In the STIX 2.1 standard, some STIX Domain Objects (SDO) can be considered as "container of knowledge", using the object_refs
attribute to refer multiple other objects as nested references. In object_refs
, it is possible to refer to entities and relationships.
{
+ "type": "report",
+ "spec_version": "2.1",
+ "id": "report--84e4d88f-44ea-4bcd-bbf3-b2c1c320bcb3",
+ "created_by_ref": "identity--a463ffb3-1bd9-4d94-b02d-74e4f1658283",
+ "created": "2015-12-21T19:59:11.000Z",
+ "modified": "2015-12-21T19:59:11.000Z",
+ "name": "The Black Vine Cyberespionage Group",
+ "description": "A simple report with an indicator and campaign",
+ "published": "2016-01-20T17:00:00.000Z",
+ "report_types": ["campaign"],
+ "object_refs": [
+ "indicator--26ffb872-1dd9-446e-b6f5-d58527e5b5d2",
+ "campaign--83422c77-904c-4dc1-aff5-5c38f3a2c55c",
+ "relationship--f82356ae-fe6c-437c-9c24-6b64314ae68a"
+ ]
+}
+
In the previous example, we have a nested reference to 3 other objects:
+"object_refs": [
+ "indicator--26ffb872-1dd9-446e-b6f5-d58527e5b5d2",
+ "campaign--83422c77-904c-4dc1-aff5-5c38f3a2c55c",
+ "relationship--f82356ae-fe6c-437c-9c24-6b64314ae68a"
+]
+
In OpenCTI, containers are displayed differently than other entities, because they contain pieces of knowledge. Here is the list of containers in the platform:
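As an illustration, a container's object_refs can be split into referenced entities and referenced relationships by looking at the STIX id prefix (a simplified helper, not part of the OpenCTI API; it only distinguishes the generic `relationship--` prefix):

```python
# Split a container's object_refs into referenced entities and referenced
# relationships, based on the STIX id type prefix (illustrative sketch).
def split_object_refs(object_refs):
    relationships = [r for r in object_refs if r.startswith("relationship--")]
    entities = [r for r in object_refs if not r.startswith("relationship--")]
    return entities, relationships
```

Applied to the report above, the indicator and the campaign come back as entities, while the `relationship--…` reference is classified as a relationship.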
+Type of entity | +STIX standard | +Description | +
---|---|---|
Report | +Native | +Reports are collections of threat intelligence focused on one or more topics, such as a description of a threat actor, malware, or attack technique, including context and related details. | +
Grouping | +Native | +A Grouping object explicitly asserts that the referenced STIX Objects have a shared context, unlike a STIX Bundle (which explicitly conveys no context). | +
Observed Data | +Native | +Observed Data conveys information about cyber security related entities such as files, systems, and networks using the STIX Cyber-observable Objects (SCOs). | +
Note | +Native | +A Note is intended to convey informative text to provide further context and/or to provide additional analysis not contained in the STIX Objects. | +
Opinion | +Native | +An Opinion is an assessment of the correctness of the information in a STIX Object produced by a different entity. | +
Case | +Extension | +A case whether an Incident Response, a Request for Information or a Request for Takedown is use to convey an epic with a set of tasks. | +
Task | +Extension | +A task, generally used in the context of case, is intended to convery information about something that must be done in a limited timeframe. | +
In the platform, it is always possible to visualize the list of entities and/or observables referenced in a container (Container > Entities or Observables
) but also to add / remove entities from the container.
As containers can also contain relationships, which are generally linked to the other entities in the container, it is also possible to visualize the container as a graph (Container > Knowledge
)
On the entity or the relationship side, you can always find all containers where the object is contained using the top menu Analysis
:
In all containers list, you can also filter containers based on one or multiple contained object(s):
Organizations
, groups
, or users
who have access to a dashboard can have 3 levels of access:
- admin: read, write, access management
- edit: read and write
- view: read-only
When a user creates a custom dashboard, it is only visible to themselves, with admin access. They can then define who can access it and with what level of rights via the Manage access button at the top right of the dashboard page.
They can give access to organizations, groups, users, but also to all users on the platform (everyone
).
It is important to note that a dashboard must have at least one user with admin
access level.
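The three access levels form a simple hierarchy, which can be sketched as follows. This is a hypothetical model for illustration, not platform code; the function names are invented:

```python
# The three dashboard access levels, ordered from weakest to strongest.
LEVELS = {"view": 1, "edit": 2, "admin": 3}

def can(user_level, action):
    """view => read; edit => read + write; admin => read + write + manage access."""
    required = {"read": 1, "write": 2, "manage_access": 3}[action]
    return LEVELS[user_level] >= required

def valid_access_list(members):
    """Platform invariant: a dashboard must keep at least one admin member."""
    return any(level == "admin" for level in members.values())
```

Under this model, removing the last `admin` member would make `valid_access_list` fail, which mirrors the rule stated above.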
The OpenCTI core design relies on the concept of a knowledge graph, where you have two different kinds of objects:
+entities
, which have some properties
or attributes
.relationships
, which are created between two entity
nodes and have some properties
or attributes
.Example
+An example would be that the entity APT28
has a relationship uses
to the malware entity Drovorub
.
To enable a unified approach to describing threat intelligence knowledge, as well as to importing and exporting data, the OpenCTI data model is based on the STIX 2.1 standard. Thus, we highly recommend taking a look at the STIX Introductory Walkthrough and at the different kinds of STIX relationships to get a better understanding of how OpenCTI works.
+Some important STIX naming shortcuts are:
+In some cases, the model has been extended to be able to:
+amplifies
, publishes
, etc.You can find below the digram of all types of entities and relationships available in OpenCTI.
+ + +To get a comprehensive list of available properties for a given type of entity or relationship, you can use the GraphQL playground schema available in your "Profile > Playground". Then you can click on schema. You can for instance search for the keyword IntrusionSet
:
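The same lookup can also be scripted against the GraphQL endpoint. This is a hedged sketch using only the standard library: the `/graphql` path and bearer-token header match common OpenCTI deployments, but the `intrusionSets` field and its Relay-style `edges`/`node` shape are assumptions to verify against your own playground schema:

```python
import json
from urllib import request

# Assumed query shape -- confirm field names in your "Profile > Playground".
QUERY = """
query IntrusionSets($search: String) {
  intrusionSets(search: $search) {
    edges { node { id name description } }
  }
}
"""

def build_graphql_request(base_url, token, search):
    """Prepare an authenticated POST request against the GraphQL endpoint."""
    payload = json.dumps({"query": QUERY, "variables": {"search": search}}).encode()
    return request.Request(
        base_url + "/graphql",
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
    )

# Usage (against a live instance):
# req = build_graphql_request("https://your-opencti.example", "<api-token>", "APT28")
# with request.urlopen(req) as resp:
#     print(json.load(resp))
```

In practice the official Python client (pycti) wraps such queries for you; the raw request above is only meant to make the schema exploration concrete.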
One of the core concepts of the OpenCTI knowledge graph is the set of underlying mechanisms implemented to accurately de-duplicate and consolidate (aka upsert) information about entities and relationships.
When an object is created in the platform, whether manually by a user or automatically by the connectors / workers chain, the platform checks if something already exists based on some properties of the object. If the object already exists, it will return the existing object and, in some cases, update it as well.
+Technically, OpenCTI generates deterministic IDs based on the properties listed below (aka "ID Contributing Properties") to prevent duplicates. It is also important to note that there is a special link between name and aliases, which prevents entities from having overlapping aliases or an alias already used as the name of another entity.
| Type | Attributes |
|---|---|
| Area | (`name` OR `x_opencti_alias`) AND `x_opencti_location_type` |
| Attack Pattern | (`name` OR `alias`) AND optional `x_mitre_id` |
| Campaign | `name` OR `alias` |
| Channel | `name` OR `alias` |
| City | (`name` OR `x_opencti_alias`) AND `x_opencti_location_type` |
| Country | (`name` OR `x_opencti_alias`) AND `x_opencti_location_type` |
| Course Of Action | (`name` OR `alias`) AND optional `x_mitre_id` |
| Data Component | `name` OR `alias` |
| Data Source | `name` OR `alias` |
| Event | `name` OR `alias` |
| Feedback Case | `name` AND `created` (date) |
| Grouping | `name` AND `context` |
| Incident | `name` OR `alias` |
| Incident Response Case | `name` OR `alias` |
| Indicator | `pattern` OR `alias` |
| Individual | (`name` OR `x_opencti_alias`) AND `identity_class` |
| Infrastructure | `name` OR `alias` |
| Intrusion Set | `name` OR `alias` |
| Language | `name` OR `alias` |
| Malware | `name` OR `alias` |
| Malware Analysis | `name` OR `alias` |
| Narrative | `name` OR `alias` |
| Note | None |
| Observed Data | `name` OR `alias` |
| Opinion | None |
| Organization | (`name` OR `x_opencti_alias`) AND `identity_class` |
| Position | (`name` OR `x_opencti_alias`) AND `x_opencti_location_type` |
| Region | `name` OR `alias` |
| Report | `name` AND `published` (date) |
| RFI Case | `name` AND `created` (date) |
| RFT Case | `name` AND `created` (date) |
| Sector | (`name` OR `alias`) AND `identity_class` |
| Task | None |
| Threat Actor | `name` OR `alias` |
| Tool | `name` OR `alias` |
| Vulnerability | `name` OR `alias` |
The deduplication process of relationships is based on the following criteria:
For STIX Cyber Observables, OpenCTI also generates deterministic IDs based on the STIX specification, using the "ID Contributing Properties" defined for each type of observable.
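For reference, the STIX 2.1 scheme can be sketched in a few lines: a UUIDv5 computed over the serialized ID contributing properties, under the namespace defined by the specification. This is a simplified illustration — the spec mandates RFC 8785 canonical JSON serialization, which plain `json.dumps` only approximates for simple string-valued properties:

```python
import json
import uuid

# Namespace fixed by the STIX 2.1 specification for SCO identifiers.
STIX_NAMESPACE = uuid.UUID("00abedb4-aa42-466c-9c01-fed23315a9b7")

def sco_id(sco_type, contributing_properties):
    """Deterministic SCO ID: UUIDv5 over the serialized contributing properties.

    Simplified: real implementations must use RFC 8785 canonical JSON.
    """
    canonical = json.dumps(contributing_properties,
                           sort_keys=True, separators=(",", ":"))
    return f"{sco_type}--{uuid.uuid5(STIX_NAMESPACE, canonical)}"

# The same value always yields the same ID, so re-ingesting the same
# observable upserts the existing object instead of creating a duplicate.
print(sco_id("ipv4-addr", {"value": "198.51.100.3"}))
```

This determinism is exactly what lets two independent connectors converge on a single observable.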
+In cases where an entity already exists in the platform, incoming creations can trigger updates to the existing entity's attributes.
+Policy for handling entity updates
If the confidence_level of the created entity is greater than or equal to the confidence_level of the existing entity, the attributes will be updated. Notably, the confidence_level will also be raised to the new value.
This logic has been implemented to converge the knowledge base towards the highest confidence and quality levels for both entities and relationships.
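A minimal sketch of this policy (illustrative pseudologic, not the platform's actual implementation):

```python
def upsert(existing, incoming):
    """Apply the confidence-based update policy described above.

    Attributes are overwritten only when the incoming entity's
    confidence_level is greater than or equal to the existing one;
    the stored confidence_level therefore only converges upward.
    """
    if incoming["confidence_level"] >= existing["confidence_level"]:
        existing.update(incoming)
    return existing
```

With this rule, a low-confidence feed can never silently downgrade knowledge previously asserted with higher confidence.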
Under construction
+We are doing our best to complete this page. +If you want to participate, don't hesitate to join the Filigran Community on Slack +or submit your pull request on the Github doc repository.
When you click on "Analyses" in the left-side bar, you see all the "Analyses" tabs, visible on the top bar on the left. By default, the user directly accesses the "Reports" tab, but can navigate to the other tabs as well.
+From the Analyses
section, users can access the following tabs:
Reports
: See Reports as a sort of containers to detail and structure what is contained on a specific report, either from a source or write by yourself. Think of it as an Intelligence Production in OpenCTI.Groupings
Groupings
: Groupings are containers, like Reports, but do not represent an Intelligence Production. They regroup Objects sharing an explicit context. For example, a Grouping might represent a set of data that, in time, given sufficient analysis, would mature to convey an incident or threat report as a Report container.
: As define by STIX 2.1 standard, Malware Analyses captures the metadata and results of a particular static or dynamic analysis performed on a malware instance or family.Notes
: Through this tab, you can find all the Notes that have been written in the platform, for example to add some analyst's unstructured knowledge about an Object.External references
: Intelligence is never created from nothing. External references give user a way to link sources or reference documents to any Object in the platform.Reports are one of the central component of the platform. It is from a Report
that knowledge is extracted and integrated in the platform for further navigation, analyses and exports. Always tying the information back to a report allows for the user to be able to identify the source of any piece of information in the platform at all time.
In the MITRE STIX 2.1 documentation, a Report
is defined as such :
++Reports are collections of threat intelligence focused on one or more topics, such as a description of a threat actor, malware, or attack technique, including context and related details. They are used to group related threat intelligence together so that it can be published as a comprehensive cyber threat story.
+
As a result, a Report
object in OpenCTI is a set of attributes and metadata defining and describing a document outside the platform, which can be a threat intelligence report from a security research team, a blog post, a press article, a video, a conference extract, a MISP event, or any other type of document and source.
When clicking on the Reports tab at the top left, you see the list of all the Reports you have access to, in accordance with your allowed marking definitions. You can then search and filter on some common and specific attributes of reports.
+When clicking on a Report, you land on the Overview tab. For a Report, the following tabs are accessible:
+Exploring and modifying the structured Knowledge contained in a Report can be done through different lenses.
In Graph view, STIX SDOs are displayed as graph nodes and relationships as graph links. Nodes are colored depending on their type. Direct relationships are displayed as plain links and inferred relationships as dotted links.
At the top right, you will find a series of icons. From there you can change the current type of view. Here you can also perform global actions on the Knowledge of the Report. Let's highlight 2 of them:
- Suggestions: This tool suggests some logical relationships to add between your contained Objects to give more consistency to your Knowledge.
- Share with an Organization: if you have designated a main Organization in the platform settings, you can share your Report and its content with users of another Organization.
At the bottom, you have many options to manipulate the graph:
- Multiple options for shaping the graph and applying forces to the nodes and links
- Multiple selection options
- Multiple filters, including a time range selector allowing you to see the evolution of the Knowledge within the Report
- Multiple creation and edition tools to modify the Knowledge contained in the Report
Through this view, you can map existing or new Objects directly from readable content, allowing you to quickly append structured Knowledge to your Report before refining it with relationships and details.
This view is a great place to see the continuum between the unstructured and structured Knowledge of a specific Intelligence Production.
This view allows you to see the structured Knowledge chronologically. This view is really useful when the report describes an attack or a campaign that lasted some time, and the analyst paid attention to the dates.
The view can be filtered and can display relationships too.
The correlation view is a great way to visualize and find other Reports related to your current subject of interest. This graph displays all Reports related to the important nodes contained in your current Report, for example Objects like Malware or Intrusion Sets.
If your Report describes, let's say, an attack, a campaign, or an understanding of an Intrusion Set, it should contain multiple Attack Pattern Objects to structure the Knowledge about the TTPs of the Threat Actor. Those attack patterns can be displayed as highlighted matrices, by default the MITRE ATT&CK Enterprise matrix. As some matrices can be huge, the view can also be filtered to only display the attack patterns described in the Report.
Groupings are an alternative to Reports for grouping Objects sharing a context without describing an Intelligence Production.
+In the MITRE STIX 2.1 documentation, a Grouping
is defined as such :
++A Grouping object explicitly asserts that the referenced STIX Objects have a shared context, unlike a STIX Bundle (which explicitly conveys no context). A Grouping object should not be confused with an intelligence product, which should be conveyed via a STIX Report. A STIX Grouping object might represent a set of data that, in time, given sufficient analysis, would mature to convey an incident or threat report as a STIX Report object. For example, a Grouping could be used to characterize an ongoing investigation into a security event or incident. A Grouping object could also be used to assert that the referenced STIX Objects are related to an ongoing analysis process, such as when a threat analyst is collaborating with others in their trust community to examine a series of Campaigns and Indicators.
+
When clicking on the Groupings tab at the top of the interface, you see the list of all the Groupings you have access to, in accordance with your allowed marking definitions. You can then search and filter on some common and specific attributes of the groupings.
Clicking on a Grouping, you land on its Overview tab. For a Grouping, the following tabs are accessible:
- Overview: as described here.
- Knowledge: a complex tab that regroups all the structured Knowledge contained in the grouping, as for a Report, except for the Timeline view. As described here.
- Entities: a table containing all SDOs (STIX Domain Objects) contained in the Grouping, with search and filters available. It also displays whether the SDO has been added directly or through inferences by the reasoning engine.
- Observables: a table containing all SCOs (STIX Cyber Observables) contained in the Grouping, with search and filters available. It also displays whether the SCO has been added directly or through inferences by the reasoning engine.
- Data: as described here.
Malware Analyses are an important part of Cyber Threat Intelligence, allowing a precise understanding of what a malware really does on the host and how, but also how and from where it receives its commands and communicates its results.
In OpenCTI, Malware Analyses can be created by enrichment connectors that take an Observable as input and perform a scan on an online service platform to bring back results. As such, Malware Analyses can be done on Files, Domains and URLs.
In the MITRE STIX 2.1 documentation, a Malware Analysis
is defined as such:
++Malware Analyses captures the metadata and results of a particular static or dynamic analysis performed on a malware instance or family.
+
When clicking on the Malware Analyses tab at the top of the interface, you see the list of all the Malware Analyses you have access to, in accordance with your allowed marking definitions. You can then search and filter on some common and specific attributes of the Malware Analyses.
Clicking on a Malware Analysis, you land on its Overview tab. The following tabs are accessible:
- Overview: this view contains some additions to the common Overview described here. You will find details about how the analysis has been performed, the global result regarding the maliciousness of the analysed artifact, and all the Observables that have been found during the analysis.
- Knowledge: if your Malware Analysis is linked to other Objects that are not part of the analysis result, they will be displayed here. As described here.
- Data: as described here.
- History: as described here.
Not all Knowledge can be structured. To allow any user to share their insights about a specific piece of Knowledge, a Note can be created for every Object and relationship in OpenCTI they can access. All the Notes are listed within the Analyses menu, allowing a global review of these unstructured additions to the global Knowledge.
+In the MITRE STIX 2.1 documentation, a Note
is defined as such :
++A Note is intended to convey informative text to provide further context and/or to provide additional analysis not contained in the STIX Objects, Marking Definition objects, or Language Content objects which the Note relates to. Notes can be created by anyone (not just the original object creator).
+
Clicking on a Note, you land on its Overview tab. The following tabs are accessible: +- Overview: as described here. +- Data: as described here. +- History: as described here.
Intelligence is never created from nothing. External references give users a way to link sources or reference documents to any Object in the platform. All external references are listed within the Analyses menu for direct access to the sources of the structured Knowledge.
In the MITRE STIX 2.1 documentation, an External Reference
is defined as such:
++External references are used to describe pointers to information represented outside of STIX. For example, a Malware object could use an external reference to indicate an ID for that malware in an external database or a report could use references to represent source material.
+
Clicking on an External reference, you land on its Overview tab. The following tabs are accessible: +- Overview: as described here.
When you click on "Arsenal" in the left-side bar, you access all the "Arsenal" tabs, visible on the top bar on the left. By default, the user directly accesses the "Malware" tab, but can navigate to the other tabs as well.
+From the Arsenal
section, users can access the following tabs:
Malware
: Malware
represents any piece of code specifically designed to damage, disrupt, or gain unauthorized access to computer systems, networks, or user data.Channels
: Channels
, in the context of cybersecurity, refer to places or means through which actors disseminate information. This category is used in particular in the context of FIMI (Foreign Information Manipulation Interference). Tools
: Tools
represent legitimate, installed software or hardware applications on an operating system that can be misused by attackers for malicious purposes (e.g. LOLBAS).
: Vulnerabilities
are weaknesses or flaws that can be exploited by attackers to compromise the security, integrity, or availability of a computer system or network.
+Use the Malware
SDO to model and track these threats comprehensively, facilitating in-depth analysis, response, and correlation with other security data.
When clicking on the Malware tab on the top left, you see the list of all the Malware you have access to, in accordance with your allowed marking definitions. These malware are displayed as Cards where you can find a summary of the important Knowledge associated with each of them: description, aliases, related intrusion sets, countries and sectors they target, and labels. You can then search and filter on some common and specific attributes of Malware.
+At the top right of each Card, you can click the star icon to put it as favorite. It will pin the card on top of the list. You will also be able to display all your favorite easily in your Custom Dashboards.
+ +When clicking on an Malware
card you land on its Overview tab. For a Malware, the following tabs are accessible:
Channels
- such as forums, websites and social media platforms (e.g. Twitter, Telegram) - are mediums for disseminating news, knowledge, and messages to a broad audience. While they offer benefits like open communication and outreach, they can also be leveraged for nefarious purposes, such as spreading misinformation, coordinating cyberattacks, or promoting illegal activities.
Monitoring and managing content within Channels
aids in analyzing threats, activities, and indicators associated with various threat actors, campaigns, and intrusion sets.
When clicking on the Channels tab at the top left, you see the list of all the Channels you have access to, in respect with your allowed marking definitions. These channels are displayed in a list where you can find certain fields characterizing the entity: type of channel, labels, and dates. You can then search and filter on some common and specific attributes of Channels.
+ +When clicking on a Channel
in the list, you land on its Overview tab. For a Channel, the following tabs are accessible:
Tools
refers to legitimate, pre-installed software applications, command-line utilities, or scripts that are present on a compromised system. These objects enable you to model and monitor the activities of these tools, which can be misused by attackers.
When clicking on the Tools
tab at the top left, you see the list of all the Tools
you have access to, in respect with your allowed marking definitions. These tools are displayed in a list where you can find certain fields characterizing the entity: labels and dates. You can then search and filter on some common and specific attributes of Tools.
When clicking on a Tool
in the list, you land on its Overview tab. For a Tool, the following tabs are accessible:
Vulnerabilities
represent weaknesses or flaws in software, hardware, configurations, or systems that can be exploited by malicious actors. This object assists in managing and tracking the organization's security posture by identifying areas that require attention and remediation, while also providing insights into associated intrusion sets, malware and campaigns where relevant.
When clicking on the Vulnerabilities
tab at the top left, you see the list of all the Vulnerabilities
you have access to, in respect with your allowed marking definitions. These vulnerabilities are displayed in a list where you can find certain fields characterizing the entity: CVSS3 severity, labels, dates and creators (in the platform). You can then search and filter on some common and specific attributes of Vulnerabilities.
When clicking on a Vulnerabilities
in the list, you land on its Overview tab. For a Vulnerability, the following tabs are accessible:
When you click on "Cases" in the left-side bar, you access all the "Cases" tabs, visible on the top bar on the left. By default, the user directly access the "Incident Responses" tab, but can navigate to the other tabs as well.
+As Analyses, Cases
can contain other objects. This way, by adding context and results of your investigations in the case, you will be able to get an up-to-date overview of the ongoing situation, and later produce more easily an incident report.
From the Cases
section, users can access the following tabs:
Incident Responses
: This type of Cases is dedicated to the management of incidents. An Incident Response case does not represent an incident, but all the context and actions that will encompass the response to a specific incident.Request for Information
: CTI teams are often asked to provide extensive information and analysis on a specific subject, be it related to an ongoing incident or a particular trending threat. Request for Information cases allow you to store context and actions relative to this type of request and its response.Request for Takedown
: When an organization is targeted by an attack campaign, a typical response action can be to request the Takedown of elements of the attack infrastructure, for example a domain name impersonating the organization to phish its employees, or an email address used to deliver phishing content. As Takedown needs in most case to reach out to external providers and be effective quickly, it often needs specific workflows. Request for Takedown cases give you a dedicated space to manage these specific actions.Tasks
: In every case, you need tasks to be performed in order to solve it. The Tasks tab allows you to review all created tasks to quickly see past due date, or quickly see every task assigned to a specific user.Feedbacks
: If you use your platform to interact with other teams and provide them CTI Knowledge, some users may want to give you feedback about it. Those feedbacks can easily be considered as another type of case to solve, as it will often refer to Knowledge inconsistency or gaps.Incident responses, Request for Information & Request for Takedown cases are an important part of the case management system in OpenCTI. Here, you can organize the work of your team to respond to cybersecurity situations. You can also give context to the team and other users on the platform about the situation and actions (to be) taken.
+To manage the situation, you can issue Tasks
and assign them to users in the platform, by directly creating a Task or by applying a Case template that will append a list of predefined tasks.
To bring context, you can use your Case as a container (like Reports or Groupings), allowing you to add any Knowledge from your platform in it. You can also use this possibility to trace your investigation, your Case playing the role of an Incident report. You will find more information about case management here.
+Incident Response, Request for Information & Request for Takedown are not STIX 2.1 Objects.
When clicking on the Incident Response, Request for Information & Request for Takedown tabs at the top, you see the list of all the Cases you have access to, in accordance with your allowed marking definitions. You can then search and filter on some common and specific attributes.
+When clicking on an Incident Response, Request for Information or Request for Takedown, you land on the Overview tab. The following tabs are accessible:
+Exploring and modifying the structured Knowledge contained in a Case can be done through different lenses.
+In Graph view, STIX SDO are displayed as graph nodes and relationships as graph links. Nodes are colored depending on their type. Direct relationship are displayed as plain link and inferred relationships in dotted link. +At the top right, you will find a series of icons. From there you can change the current type of view. Here you can also perform global action on the Knowledge of the Case. Let's highlight 2 of them:
+Through this view, you can map existing or new Objects directly from a readable content, allowing you to quickly append structured Knowledge in your Case before refining it with relationships and details. +This view is a great place to see the continuum between unstructured and structured Knowledge.
+This view allows you to see the structured Knowledge chronologically. This view is particularly useful in the context of a Case, allowing you to see the chain of events, either from the attack perspectives, the defense perspectives or both. +The view can be filtered and displayed relationships too.
+If your Case contains attack patterns, you will be able to visualize them in a Matrix view.
+Tasks are actions to be performed in the context of a Case (Incident Response, Request for Information, Request for Takedown). Usually, a task is assigned to a user, but important tasks may involve more participants.
+When clicking on the Tasks tab at the top of the interface, you see the list of all the Tasks you have access to, in respect with your allowed marking definitions. You can then search and filter on some common and specific attributes of the tasks.
+Clicking on a Task, you land on its Overview tab. For a Tasks, the following tabs are accessible: +- Overview: as described here. +- Data: as described here. +- History: as described here.
+When a user fill a feedback form from its Profile/Feedback menu, it will then be accessible here.
+This feature gives the opportunity to engage with other users of your platform and to respond directly to their concern about it or the Knowledge, without the need of third party software.
+ +Clicking on a Feedback, you land on its Overview tab. For a Feedback, the following tabs are accessible: +- Overview: as described here. +- Content: as described here. +- Data: as described here. +- History: as described here.
+ + + + + + + + + + + + + + + + + + +OpenCTI's Entities objects provides a comprehensive framework for modeling various targets and attack victims within your threat intelligence data. With five distinct Entity object types, you can represent sectors, events, organizations, systems, and individuals. This robust classification empowers you to contextualize threats effectively, enhancing the depth and precision of your analysis.
+When you click on "Entities" in the left-side bar, you access all the "Entities" tabs, visible on the top bar on the left. By default, the user directly access the "Sectors" tab, but can navigate to the other tabs as well.
+From the Entities
section, users can access the following tabs:
Sectors
: areas of activity.Events
: event in the real world.Organizations
: groups with specific aims such as companies and government entities.Systems
: technologies such as platforms and software.Individuals
: real persons.Sectors represent specific domains of activity, defining areas such as energy, government, health, finance, and more. Utilize sectors to categorize targeted industries or sectors of interest, providing valuable context for threat intelligence analysis within distinct areas of the economy.
+When clicking on the Sectors tab at the top left, you see the list of all the Sectors you have access to, in respect with your allowed marking definitions.
+ +When clicking on a Sector
in the list, you land on its Overview tab. For a Sector, the following tabs are accessible:
Sightings
relationships corresponding to events in which an Indicator
(IP, domain name, url, etc.) is sighted in the Sector.Events encompass occurrences like international sports events, summits (e.g., G20), trials, conferences, or any significant happening in the real world. By modeling events, you can analyze threats associated with specific occurrences, allowing for targeted investigations surrounding high-profile incidents.
+When clicking on the Events tab at the top left, you see the list of all the Events you have access to, in respect with your allowed marking definitions.
+ +When clicking on an Event
in the list, you land on its Overview tab. For an Event, the following tabs are accessible:
Sightings
relationships corresponding to events in which an Indicator
(IP, domain name, url, etc.) is sighted during an attack against the Event.Organizations include diverse entities such as companies, government bodies, associations, non-profits, and other groups with specific aims. Modeling organizations enables you to understand the threat landscape concerning various entities, facilitating investigations into cyber-espionage, data breaches, or other malicious activities targeting specific groups.
+When clicking on the Organizations tab at the top left, you see the list of all the Organizations you have access to, in respect with your allowed marking definitions.
+ +When clicking on an Organization
in the list, you land on its Overview tab. For an Organization, the following tabs are accessible:
Sightings
relationships corresponding to events in which an Indicator
(IP, domain name, url, etc.) is sighted in the Organization.Furthermore, an Organization can be observed from an "Author" perspective. It is possible to change this viewpoint to the right of the entity name, using the "Display as" drop-down menu (see screenshot below). This different perspective is accessible in the Overview, Knowledge and Analyses tabs. When switched to "Author" mode, the observed data pertains to the entity's description as an author within the platform:
+Report
, Groupings
) and Cases (Incident response
, Request for Information
, Request for Takedown
) for which the Organization is the author.Systems represent software applications, platforms, frameworks, or specific tools like WordPress, VirtualBox, Firefox, Python, etc. Modeling systems allows you to focus on threats related to specific software or technology, aiding in vulnerability assessments, patch management, and securing critical applications.
+When clicking on the Systems tab at the top left, you see the list of all the Systems you have access to, in respect with your allowed marking definitions.
+ +When clicking on a System
in the list, you land on its Overview tab. For a System, the following tabs are accessible:
Sightings
relationships corresponding to events in which an Indicator
(IP, domain name, url, etc.) is sighted in the System.Furthermore, a System can be observed from an "Author" perspective. It is possible to change this viewpoint to the right of the entity name, using the "Display as" drop-down menu (see screenshot below). This different perspective is accessible in the Overview, Knowledge and Analyses tabs. When switched to "Author" mode, the observed data pertains to the entity's description as an author within the platform:
+Report
, Groupings
) and Cases (Incident response
, Request for Information
, Request for Takedown
) for which the System is the author.Individuals represent specific persons relevant to your threat intelligence analysis. This category includes targeted individuals, or influential figures in various fields. Modeling individuals enables you to analyze threats related to specific people, enhancing investigations into cyber-stalking, impersonation, or other targeted attacks.
+When clicking on the Individuals tab at the top left, you see the list of all the Individuals you have access to, in accordance with your allowed marking definitions.
+ +When clicking on an Individual
in the list, you land on its Overview tab. For an Individual, the following tabs are accessible:
Sightings
relationships corresponding to events in which an Indicator
(IP, domain name, url, etc.) is sighted in the Individual.Furthermore, an Individual can be observed from an "Author" perspective. It is possible to change this viewpoint to the right of the entity name, using the "Display as" drop-down menu (see screenshot below). This different perspective is accessible in the Overview, Knowledge and Analyses tabs. When switched to "Author" mode, the observed data pertains to the entity's description as an author within the platform:
+Report
, Groupings
) and Cases (Incident response
, Request for Information
, Request for Takedown
) for which the Individual is the author.When you click on "Events" in the left-side bar, you access all the "Events" tabs, visible on the top bar on the left. By default, the user directly access the "Incidents" tab, but can navigate to the other tabs as well.
+From the Events
section, users can access the following tabs:
Incidents
: In OpenCTI, Incidents
correspond to a negative event happening on an information system. This can include a cyberattack (intrusion, phishing, etc.), a consolidated security alert generated by a SIEM or EDR that needs to be qualified, and so on. It can also refer to an information warfare attack in the context of countering disinformation.
: Sightings
correspond to the event in which an Observable
(IP, domain name, certificate, etc.) is detected by or within an information system, an individual or an organization. Most often, this corresponds to a security event transmitted by a SIEM or an EDR.Observed Data
: Observed Data
has been added to OpenCTI for compliance with the STIX 2.1 standard. You can see it as a pseudo-container that contains Observables, like a line of a firewall log for example. Currently, it is rarely used.Incidents usually represent negative events impacting resources you want to protect, but local definitions can vary a lot, from a simple security event sent by a SIEM to a massive-scale supply chain attack impacting a whole activity sector.
+In the MITRE STIX 2.1, the Incident
SDO has not yet been finalized and is the subject of significant work as part of a forthcoming STIX Extension.
When clicking on the Incidents tab at the top left, you see the list of all the Incidents you have access to, in accordance with your allowed marking definitions.
+ +When clicking on an Incident
in the list, you land on its Overview tab. For an Incident, the following tabs are accessible:
The Sightings
correspond to events in which an Observable
(IP, domain name, url, etc.) is detected by or within an information system, an individual or an organization. Most often, this corresponds to a security event transmitted by a SIEM or EDR.
In OpenCTI, as we are in a cybersecurity context, Sightings
are associated with Indicators
of Compromise (IoC) and the notion of "True positive" and "False positive".
It is important to note that Sightings are a type of relationship (not a STIX SDO or STIX SCO) between an Observable and an Entity or a Location.
+When clicking on the Sightings tab at the top left, you see the list of all the Sightings you have access to, in accordance with your allowed marking definitions.
+ +When clicking on a Sighting
in the list, you land on its Overview tab. As with other relationships in the platform, a Sighting's overview displays common related metadata, containers, external references, notes and entities linked by the relationship.
In addition, this overview displays: +- Qualification : if the Sighting is a True Positive or a False Positive +- Count : number of times the event has been seen
+Observed Data
corresponds to an extract from a log that contains Observables.
In the MITRE STIX 2.1, the Observed Data
SDO is defined as such:
++Observed Data conveys information about cybersecurity related entities such as files, systems, and networks using the STIX Cyber-observable Objects (SCOs). For example, Observed Data can capture information about an IP address, a network connection, a file, or a registry key. Observed Data is not an intelligence assertion, it is simply the raw information without any context for what it means.
+
When clicking on the Observed Data
tab at the top left, you see the list of all the Observed Data
you have access to, in accordance with your allowed marking definitions.
When clicking on an Observed Data
in the list, you land on its Overview tab. The following tabs are accessible:
OpenCTI's Location objects provide a comprehensive framework for representing various geographic entities within your threat intelligence data. With five distinct Location object types, you can precisely define regions, countries, areas, cities, and specific positions. This robust classification empowers you to contextualize threats geographically, enhancing the depth and accuracy of your analysis.
+When you click on "Locations" in the left-side bar, you access all the "Locations" tabs, visible on the top bar on the left. By default, the user directly accesses the "Regions" tab, but can navigate to the other tabs as well.
+From the Locations
section, users can access the following tabs:
Regions
: very large geographical territories, such as a continent.Countries
: the world's countries.Areas
: more or less extensive geographical areas, often without clearly defined boundaries.
: the world's cities.Positions
: very precise positions on the globe.Regions encapsulate broader geographical territories, often representing continents or significant parts of continents. Examples include EMEA (Europe, Middle East, and Africa), Asia, Western Europe, and North America. Utilize regions to categorize large geopolitical areas and gain macro-level insights into threat patterns.
+When clicking on the Regions tab at the top left, you see the list of all the Regions you have access to, in accordance with your allowed marking definitions.
+ +When clicking on a Region
in the list, you land on its Overview tab. For a Region, the following tabs are accessible:
Details
section but a map locating the Region.Sightings
relationships corresponding to events in which an Indicator
(IP, domain name, url, etc.) is sighted in a Region.Countries represent individual nations across the world. With this object type, you can specify detailed information about a particular country, enabling precise localization of threat intelligence data. Countries are fundamental entities in geopolitical analysis, offering a focused view of activities within national borders.
+When clicking on the Countries tab at the top left, you see the list of all the Countries you have access to, in accordance with your allowed marking definitions.
+ +When clicking on a Country
in the list, you land on its Overview tab. For a Country, the following tabs are accessible:
Details
section but a map locating the Country.Sightings
relationships corresponding to events in which an Indicator
(IP, domain name, url, etc.) is sighted in a Country.Areas define specific geographical regions of interest, such as the Persian Gulf, the Balkans, or the Caucasus. Use areas to identify unique zones with distinct geopolitical, cultural, or strategic significance. This object type facilitates nuanced analysis of threats within defined geographic contexts.
+When clicking on the Areas tab at the top left, you see the list of all the Areas you have access to, in accordance with your allowed marking definitions.
+ +When clicking on an Area
in the list, you land on its Overview tab. For an Area, the following tabs are accessible:
Details
section but a map locating the Area.Sightings
relationships corresponding to events in which an Indicator
(IP, domain name, url, etc.) is sighted in an Area.Cities provide granular information about urban centers worldwide. From major metropolises to smaller towns, cities are crucial in understanding localized threat activities. With this object type, you can pinpoint threats at the urban level, aiding in tactical threat assessments and response planning.
+When clicking on the Cities tab at the top left, you see the list of all the Cities you have access to, in accordance with your allowed marking definitions.
+ +When clicking on a City
in the list, you land on its Overview tab. For a City, the following tabs are accessible:
Details
section but a map locating the City.Sightings
relationships corresponding to events in which an Indicator
(IP, domain name, url, etc.) is sighted in a City.Positions represent highly precise geographical points, such as monuments, buildings, or specific event locations. This object type allows you to define exact coordinates, enabling accurate mapping of events or incidents. Positions enhance the granularity of your threat intelligence data, facilitating precise geospatial analysis.
+When clicking on the Positions tab at the top left, you see the list of all the Positions you have access to, in accordance with your allowed marking definitions.
+ +When clicking on a Position
in the list, you land on its Overview tab. For a Position, the following tabs are accessible:
Sightings
relationships corresponding to events in which an Indicator
(IP, domain name, url, etc.) is sighted at a Position.Under construction
+We are doing our best to complete this page. +If you want to participate, don't hesitate to join the Filigran Community on Slack +or submit your pull request on the Github doc repository.
+When you click on "Techniques" in the left-side bar, you access all the "Techniques" tabs, visible on the top bar on the left. By default, the user directly accesses the "Attack pattern" tab, but can navigate to the other tabs as well.
+From the Techniques
section, users can access the following tabs:
Attack pattern
: attack patterns used by threat actors to perform their attacks. By default, OpenCTI is provisioned with attack patterns from the MITRE ATT&CK matrices (for CTI) and the DISARM matrix (for FIMI).
: In OpenCTI, narratives used by threat actors can be represented and linked to other Objects. Narratives are mainly used in the context of disinformation campaigns where it is important to trace which narratives have been and are still used by threat actors.Courses of action
: A Course of Action is an action taken either to prevent an attack or to respond to an attack that is in progress. It may describe technical, automatable responses (applying patches, reconfiguring firewalls) but can also describe higher level actions like employee training or policy changes. For example, a course of action to mitigate a vulnerability could describe applying the patch that fixes it.Data sources
: Data sources represent the various subjects/topics of information that can be collected by sensors/logs. Data sources also include data components. Data components
: Data components identify specific properties/values of a data source relevant to detecting a given ATT&CK technique or sub-technique.Attack patterns are used by threat actors to perform their attacks. By default, OpenCTI is provisioned with attack patterns from the MITRE ATT&CK matrices and CAPEC (for CTI) and the DISARM matrix (for FIMI).
+In the MITRE STIX 2.1 documentation, an Attack pattern
is defined as follows:
++Attack Patterns are a type of TTP that describe ways that adversaries attempt to compromise targets. Attack Patterns are used to help categorize attacks, generalize specific attacks to the patterns that they follow, and provide detailed information about how attacks are performed. An example of an attack pattern is "spear phishing": a common type of attack where an attacker sends a carefully crafted e-mail message to a party with the intent of getting them to click a link or open an attachment to deliver malware. Attack Patterns can also be more specific; spear phishing as practiced by a particular threat actor (e.g., they might generally say that the target won a contest) can also be an Attack Pattern.
+
When clicking on the Attack pattern tab at the top left, you access the list of all the attack patterns you have access to, in accordance with your allowed marking definitions. You can then search and filter on some common and specific attributes of attack patterns.
+When clicking on an Attack pattern, you land on its Overview tab. For an Attack pattern, the following tabs are accessible:
+Overview: the Overview of an Attack pattern is slightly different from the usual one described here. The "Details" box is more structured and contains information about:
+parent or subtechniques (as in the MITRE ATT&CK matrices),
+In OpenCTI, narratives used by threat actors can be represented and linked to other Objects. Narratives are mainly used in the context of disinformation campaigns where it is important to trace which narratives have been and are still used by threat actors.
+An example of Narrative can be "The country A is weak and corrupted" or "The ongoing operation aims to free people".
+A Narrative can be a means in the context of a broader attack, or the goal of the operation itself: a vision to impose.
+When clicking on the Narrative tab at the top left, you access the list of all the Narratives you have access to, in accordance with your allowed marking definitions. You can then search and filter on some common and specific attributes of narratives.
+When clicking on a Narrative, you land on its Overview tab. For a Narrative, the following tabs are accessible:
+In the MITRE STIX 2.1 documentation, a Course of action
is defined as follows:
++A Course of Action is an action taken either to prevent an attack or to respond to an attack that is in progress. It may describe technical, automatable responses (applying patches, reconfiguring firewalls) but can also describe higher level actions like employee training or policy changes. For example, a course of action to mitigate a vulnerability could describe applying the patch that fixes it.
+
When clicking on the Courses of action
tab at the top left, you access the list of all the Courses of action you have access to, in accordance with your allowed marking definitions. You can then search and filter on some common and specific attributes of courses of action.
When clicking on a Course of Action
, you land on its Overview tab. For a Course of action, the following tabs are accessible:
In the MITRE ATT&CK documentation, Data sources
are defined as follows:
++Data sources represent the various subjects/topics of information that can be collected by sensors/logs. Data sources also include data components, which identify specific properties/values of a data source relevant to detecting a given ATT&CK technique or sub-technique.
+
When clicking on a Data source
or a Data component
, you land on its Overview tab. For a Data source or a Data component, the following tabs are accessible:
When you click on "Threats" in the left-side bar, you access all the "Threats" tabs, visible on the top bar on the left. By default, the user directly accesses the "Threat Actor (Group)" tab, but can navigate to the other tabs as well.
+From the Threats
section, users can access the following tabs:
Threat actors (Group)
: Threat actor (Group) represents a physical group of attackers operating an Intrusion set, using malware and attack infrastructure, etc.Threat actors (Individual)
: Threat actor (Individual) represents a real attacker that can be described by physical and personal attributes and motivations. Threat actor (Individual) operates Intrusion set, uses malware and infrastructure, etc.Intrusion sets
: An Intrusion set is an important concept in the Cyber Threat Intelligence field. It is a consistent set of technical and non-technical elements describing what, how and why a Threat actor acts. It is particularly useful for associating multiple attacks and malicious actions with a defined Threat, even without sufficient information regarding who performed them. Often, as your understanding of the threat grows, you will link an Intrusion set to a Threat actor (either a Group or an Individual).Campaigns
: Campaign represents a series of attacks taking place in a certain period of time and/or targeting a consistent subset of Organization/Individual.Threat actors are the humans who are building, deploying and operating intrusion sets. A threat actor can be a single individual or a group of attackers (who may be composed of individuals). A group of attackers may be a nation-state, a state-sponsored group, a corporation, a group of hacktivists, etc.
+Beware, groups of attackers might be modelled as "Intrusion sets" in feeds, as there is sometimes a misunderstanding in the industry between groups of people and the technical/operational intrusion sets they operate.
+ +When clicking on the Threat actor (Group or Individual) tabs at the top left, you see the list of all the groups of Threat actors or Individual Threat actors you have access to, in accordance with your allowed marking definitions. These groups or individuals are displayed as Cards where you can find a summary of the important Knowledge associated with each of them: description, aliases, malware they used, countries and industries they target, labels. You can then search and filter on some common and specific attributes of Threat actors.
+At the top right of each Card, you can click the star icon to mark it as a favorite. It will pin the card at the top of the list. You will also be able to easily display all your favorites in your Custom Dashboards.
+Individual Threat actors have unique properties to represent demographic and biographic information. Currently tracked demographics include their countries of residence, citizenships, date of birth, gender, and more.
+ +Biographic information includes their eye and hair color, as well as known heights and weights.
+ +An Individual Threat actor can also be tracked as employed by an Organization or a Threat Actor group. This relationship can be set under the knowledge tab.
+When clicking on a Threat actor Card, you land on its Overview tab. For a Threat actor, the following tabs are accessible:
+An intrusion set is a consistent group of technical elements such as "tactics, techniques and procedures" (TTPs), tools, malware and infrastructure used by a threat actor against one or a number of victims who usually share some characteristics (field of activity, country or region) to reach a similar goal, whoever the victim is. The intrusion set may be deployed once or several times and may evolve with time. +Several intrusion sets may be linked to one threat actor. All the entities described below may be linked to one intrusion set. There are many debates in the Threat Intelligence community on how to define an intrusion set and how to distinguish several intrusion sets with regard to:
+As OpenCTI is very customizable, each organization or individual may use these categories as they wish. Alternatively, it is also possible to follow the categorization provided by your import feeds.
+ +When clicking on the Intrusion set tab on the top left, you see the list of all the Intrusion sets you have access to, in accordance with your allowed marking definitions. These intrusion sets are displayed as Cards where you can find a summary of the important Knowledge associated with each of them: description, aliases, malware they used, countries and industries they target, labels. You can then search and filter on some common and specific attributes of Intrusion sets.
+At the top right of each Card, you can click the star icon to mark it as a favorite. It will pin the card at the top of the list. You will also be able to easily display all your favorites in your Custom Dashboards.
+When clicking on an Intrusion set Card, you land on its Overview tab. The following tabs are accessible:
+A campaign can be defined as "a series of malicious activities or attacks (sometimes called a "wave of attacks") taking place within a limited period of time, against a defined group of victims, associated with a similar intrusion set and characterized by the use of one or several identical malware towards the various victims and common TTPs". +However, a campaign is an investigation element and may not be widely recognized. Thus, a provider might define a series of attacks as a campaign and another as an intrusion set. +Campaigns can be attributed to an Intrusion set.
+ +When clicking on the Campaign tab on the top left, you see the list of all the Campaigns you have access to, in accordance with your allowed marking definitions. These campaigns are displayed as Cards where you can find a summary of the important Knowledge associated with each of them: description, aliases, malware used, countries and industries they target, labels. You can then search and filter on some common and specific attributes of Campaigns.
+At the top right of each Card, you can click the star icon to mark it as a favorite. It will pin the card at the top of the list. You will also be able to easily display all your favorites in your Custom Dashboards.
+When clicking on a Campaign Card, you land on its Overview tab. The following tabs are accessible:
+Under construction
+We are doing our best to complete this page. +If you want to participate, don't hesitate to join the Filigran Community on Slack +or submit your pull request on the Github doc repository.
+Under construction
+We are doing our best to complete this page. +If you want to participate, don't hesitate to join the Filigran Community on Slack +or submit your pull request on the Github doc repository.
+The best way to consume OpenCTI data, whether it is through a stream connector or within another OpenCTI instance, is to use the live streams. Live streams are like TAXII collections (i.e. serving STIX 2.1 bundles) on steroids. This means that live streams support:
+To better understand how live streams work, let's take a few examples, from simple to complex.
+Given a live stream with filters Entity type: Indicator AND
Label: detection. Let's see what happens with an indicator with:
TLP:GREEN
Crowdstrike
indicates
to the malware Emotet
Action | +Result in stream (resolve-dependencies=false ) |
+Result in stream (resolve-dependencies=true ) |
+
---|---|---|
1. Create an indicator | +Nothing | +Nothing | +
2. Add the label detection |
+Create TLP:GREEN , create CrowdStrike , create the indicator |
+Create TLP:GREEN , create CrowdStrike , create the malware Emotet , create the indicator, create the relationship indicates |
+
3. Remove the label detection |
+Delete the indicator | +Delete the indicator | +
4. Add the label detection |
+Create the indicator | +Create the indicator, create the relationship indicates |
+
5. Delete the indicator | +Delete the indicator | +Delete the indicator | +
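Under the hood, a live stream is consumed as a Server-Sent Events (SSE) feed, so each row of the table above arrives as a `create` or `delete` event on the client side. The sketch below shows one way such events could be parsed; the endpoint path, header names and payload shape in the commented network part are illustrative assumptions, not a definitive client implementation.

```python
import json

def parse_sse_event(raw_block: str) -> dict:
    """Parse one Server-Sent Events block into {'event': ..., 'data': ...}.
    A block looks like:  'event: create\ndata: {"data": {...}}'"""
    event, data_lines = None, []
    for line in raw_block.splitlines():
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            # Per the SSE format, multiple data lines are concatenated
            data_lines.append(line[len("data:"):].strip())
    payload = json.loads("".join(data_lines)) if data_lines else None
    return {"event": event, "data": payload}

# Hypothetical network part, kept separate so the parser stays testable offline:
#
# import requests
# resp = requests.get(f"{base_url}/stream/{stream_id}",
#                     headers={"Authorization": f"Bearer {token}"},
#                     stream=True)
# for block in resp.iter_lines(decode_unicode=True, delimiter="\n\n"):
#     handle(parse_sse_event(block))
```

A `delete` event would then be dispatched to remove the corresponding object downstream, mirroring the "Delete the indicator" rows in the table.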
OpenCTI has an embedded TAXII API endpoint which provides valid STIX 2.1 bundles. If you wish to know more about the TAXII standard, please read the official introduction.
+In OpenCTI you can create as many TAXII 2.1 collections as needed. Each of them can have specific filters to publish only a subset of the platform overall knowledge (specific types of entities, labels, marking definitions, etc.).
+ +After creating a new collection, every system with a proper access token can consume the collection using different kinds of authentication (basic, bearer, etc.).
+As when using the GraphQL API, TAXII 2.1 collections have a classic pagination system that should be handled by the consumer. Also, it's important to understand that element dependencies (nested IDs) inside the collection are not always contained/resolved in the bundle, so consistency needs to be handled at the client level.
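As a sketch of the pagination handling described above, the following client pages through a collection's objects by following the TAXII 2.1 `more`/`next` envelope fields. The URL path and bearer authentication mirror the examples in this documentation; the injectable `get` callable is a testing convenience, not part of any official client, and resolving nested dependencies would still be up to the consumer.

```python
def fetch_all_objects(base_url: str, collection_id: str, token: str, get=None):
    """Collect every STIX object from a TAXII 2.1 collection, following the
    'next' cursor until the envelope reports more=False."""
    if get is None:
        import requests  # only needed for real HTTP calls
        session = requests.Session()
        session.headers.update({
            "Authorization": f"Bearer {token}",
            "Accept": "application/taxii+json;version=2.1",
        })
        get = lambda url, params: session.get(url, params=params).json()

    url = f"{base_url}/taxii2/root/collections/{collection_id}/objects/"
    params, objects = {}, []
    while True:
        envelope = get(url, params)
        objects.extend(envelope.get("objects", []))
        if not envelope.get("more"):
            return objects
        params = {"next": envelope["next"]}  # TAXII 2.1 pagination cursor
```

Stopping only when `more` is false ensures no page is silently dropped, which is the main pitfall the paragraph above warns about.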
+OpenCTI is able to publish data in CSV feeds on a rolling period.
This guide aims to give you a full overview of the OpenCTI features and workflows. The platform can be used in various contexts to handle threat management use cases from a technical to a more strategic level. OpenCTI has been designed as a knowledge graph, taking inputs (threat intelligence feeds, sightings & alerts, vulnerabilities, assets, artifacts, etc.) and generating outputs based on built-in capabilities and/or connectors.
+Here are some examples of use cases:
The welcome page gives any visitor to the OpenCTI platform an overview of the platform's live activity. It can be replaced by a custom dashboard, created by a user (or by the default dashboard defined in a role, a group or an organization).
+ +Component | +Description | +
---|---|
Total entities | +Number of entities (threat actor , intrusion set , indicator , etc.). |
+
Total relationships | +Number of relationships (targets , uses , indicates , etc.). |
+
Total reports | +Number of reports. | +
Total observables | +Number of observables (IPv4-Addr , File , etc.). |
+
Component | +Description | +
---|---|
Top labels | +Top labels given to entities during the last 3 months. | +
Ingested entities | +Number of entities ingested by month. | +
Top 10 active entities | +List of the entities with the greatest number of relations over the last 3 months. | +
Targeted countries | +Intensity of the targeting tied to the number of relations targets for a given country. |
+
Observable distribution | +Distribution of the number of observables by type. | +
Last ingested reports | +Last reports ingested in the platform. | +
Automated imports in OpenCTI streamline the process of data ingestion, allowing users to effortlessly bring in valuable intelligence from diverse sources. This page focuses on the automated methods of importing data, which serve as bridges between OpenCTI and diverse external systems, formatting it into a STIX bundle, and importing it into the OpenCTI platform.
+Connectors in OpenCTI serve as dynamic gateways, facilitating the import of data from a wide array of sources and systems. Every connector is designed to handle specific data types and structures of the source, allowing OpenCTI to efficiently ingest the data.
+The behavior of each connector is defined by its development, determining the types of data it imports and its configuration options. This flexibility allows users to customize the import process to their specific needs, ensuring a seamless and personalized data integration experience.
+The level of configuration granularity regarding the imported data type varies with each connector. Nevertheless, connectors empower users to specify the date from which they wish to fetch data. This capability is particularly useful during the initial activation of a connector, enabling the retrieval of historical data. Following this, the connector operates in real-time, continuously importing new data from the source.
+OpenCTI's connector ecosystem covers a broad spectrum of sources, enhancing the platform's capability to integrate data from various contexts, from threat intelligence providers to specialized databases. The list of available connectors can be found in our connectors catalog. Connectors are categorized into three types: import connectors (the focus here), enrichment connectors, and stream consumers. Further documentation on connectors is available on the dedicated documentation page.
+In summary, automated imports through connectors empower OpenCTI users with a scalable, efficient, and customizable mechanism for data ingestion, ensuring that the platform remains enriched with the latest and most relevant intelligence.
+In OpenCTI, the "Data > Ingestion" section provides users with built-in functions for automated data import. These functions are designed for specific purposes and can be configured to seamlessly ingest data into the platform. Here, we'll explore the configuration process for the three built-in functions: Live Streams, TAXII Feeds, and RSS Feeds.
Live Streams enable users to consume data from another OpenCTI platform, fostering collaborative intelligence sharing. Here's a step-by-step guide to configure a Live stream synchroniser:
+https://[domain]
; don't include the path).Additional configuration options:
TAXII Feeds in OpenCTI provide a robust mechanism for ingesting TAXII collections from TAXII servers or other OpenCTI instances. Configuring a TAXII ingester involves specifying essential details to seamlessly integrate threat intelligence data. Here's a step-by-step guide to configure TAXII ingesters:
+https://[domain]/taxii2/root
.426e3acb-db50-4118-be7e-648fab67c16c
.Additional configuration options:
+RSS Feeds functionality enables users to seamlessly ingest items in report form from specified RSS feeds. Configuring RSS Feeds involves providing essential details and selecting preferences to tailor the import process. Here's a step-by-step guide to configure RSS ingesters:
+Additional configuration options:
+Users can streamline the data ingestion process using various automated import capabilities. Each method proves beneficial in specific circumstances.
+By leveraging these automated import functionalities, OpenCTI users can build a comprehensive, up-to-date threat intelligence database. The platform's adaptability and user-friendly configuration options ensure that intelligence workflows remain agile, scalable, and tailored to the unique needs of each organization.
+ + + + + + + + + + + + + + + + + + +The platform provides a seamless process for automatically parsing data from various file formats. This capability is facilitated by two distinct mechanisms:
+Currently, there are two connectors designed for importing files and automatically identifying entities.
+ImportFileStix
: Designed to handle STIX-structured files (json or xml format).ImportDocument
: Versatile connector supporting an array of file formats, including pdf, text, html, and markdown.The CSV mapper is a tailored functionality to facilitate the import of data stored in CSV files. For more in-depth information on using CSV mappers, refer to the CSV Mappers documentation page.
Both mechanisms can be employed wherever file uploads are possible. This includes the "Data" tabs of all entities and the dedicated panel named "Data import and analyst workbenches" located in the top right-hand corner (database logo with a small gear). Importing files from these two locations is not entirely equivalent; refer to the "Relationship handling from entity's Data tab" section below for details on this matter.
+For ImportDocument
connector, the identification process involves searching for existing entities in the platform and scanning the document for relevant information. In addition, the connector uses regular expressions (regex) to detect IP addresses and domains within the document.
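As an illustration of this regex-based detection, the patterns below pull IPv4 addresses and domain names out of free text. These expressions are simplified stand-ins for the sketch; the patterns actually shipped with the ImportDocument connector may differ.

```python
import re

# Simplified, illustrative patterns -- not the connector's actual expressions.
IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
DOMAIN_RE = re.compile(r"\b(?:[a-z0-9-]+\.)+[a-z]{2,}\b", re.IGNORECASE)

def extract_observables(text: str) -> dict:
    """Scan free text and return candidate IPv4 addresses and domain names."""
    ips = set(IPV4_RE.findall(text))
    # Keep only matches the IPv4 pattern did not already claim
    domains = {d for d in DOMAIN_RE.findall(text) if d not in ips}
    return {"ipv4": sorted(ips), "domains": sorted(domains)}
```

Real-world detection also needs defanging support (`hxxp`, `[.]`) and false-positive filtering (e.g. version numbers matching the IPv4 shape), which is part of why the connector pairs regex scanning with lookups against entities already in the platform.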
As for the ImportFileStix
connector and the CSV mappers, there is no identification mechanism. The imported data will be, respectively, the data defined in the STIX bundle or according to the configuration of the CSV mapper used.
It's essential to note that CSV mappers operate differently from other import mechanisms. Unlike connectors, CSV mappers do not generate workbenches. Instead, the data identified by CSV mappers is imported directly into the platform without an intermediary workbench stage.
+When importing a document directly from an entity's "Data" tab, there can be an automatic addition of relationships between the objects identified by connectors and the entity in focus. The process differs depending on the type of entity in which the import occurs:
+Related to
relationships between the Observables and the entity are automatically added to the workbench and created once the workbench is validated.
tab of Analyses or Cases. In this scenario, the file is directly added as an attachment without utilizing an import mechanism.
In order to initiate file imports, users must possess the requisite capability: "Upload knowledge files." This capability ensures that only authorized users can contribute and manage knowledge files within the OpenCTI platform, maintaining a controlled and secure environment for data uploads.
+Deprecation warning
+Using the ImportDocument
connector to parse CSV files is now disallowed, as it produces inconsistent results.
+Please configure and use CSV mappers dedicated to your specific CSV content for reliable parsing.
+CSV mappers can be created and configured in the administration interface.
OpenCTI enforces strict rules to determine the period during which an indicator is effective for detection. This period is defined by the valid_from
and valid_until
dates. In the future, throughout this lifetime, the indicator score
will decrease according to a customizable algorithm.
After the indicator fully expires, the object is marked as revoked
and the detection
field is automatically set to false
. Here, we outline how these dates are calculated within the OpenCTI platform. This documentation will also be enhanced to cover the score impact.
If a data source provides valid_from
and valid_until
dates when creating an indicator on the platform, these dates are used without modification.
If a data source does not provide validity dates, OpenCTI applies specific rules to determine these dates based on the "main observable type" of the indicator and its associated markings.
| Indicator type | Marking | TTL (in days) |
|---|---|---|
| IPv4-Addr and IPv6-Addr | TLP:CLEAR to TLP:AMBER | 30 |
| IPv4-Addr and IPv6-Addr | TLP:AMBER+STRICT and TLP:RED | 60 |
| IPv4-Addr and IPv6-Addr | Others | 60 |
| URL | TLP:CLEAR to TLP:GREEN | 60 |
| URL | TLP:AMBER to TLP:RED | 180 |
| URL | Others | 180 |
| Others (e.g. Domain-Name, File, YARA) | All | 365 |
The TTL represents the duration for which an indicator is considered valid - i.e. here, the number of days between valid_from
and valid_until
. After this period, the indicator is marked as revoked.
If a URL indicator with TLP:AMBER
marking is created without specific validity dates, it will be considered valid for 180 days from its valid_from
date. After 180 days, the valid_until
date will be reached and the indicator will be automatically revoked.
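+As an illustration, the default TTL rules described above can be sketched as follows. This is a minimal sketch mirroring the table, not OpenCTI's actual implementation; the function names are hypothetical:

```python
from datetime import datetime, timedelta

IP_TYPES = {"IPv4-Addr", "IPv6-Addr"}

def indicator_ttl_days(observable_type: str, marking: str) -> int:
    """Default TTL in days, mirroring the documentation table (illustrative helper)."""
    if observable_type in IP_TYPES:
        # TLP:CLEAR to TLP:AMBER -> 30 days, everything else -> 60 days
        return 30 if marking in {"TLP:CLEAR", "TLP:GREEN", "TLP:AMBER"} else 60
    if observable_type == "URL":
        # TLP:CLEAR to TLP:GREEN -> 60 days, everything else -> 180 days
        return 60 if marking in {"TLP:CLEAR", "TLP:GREEN"} else 180
    # Others (e.g. Domain-Name, File, YARA) -> 365 days for all markings
    return 365

def valid_until(valid_from: datetime, observable_type: str, marking: str) -> datetime:
    """Compute the expiration date from the default TTL."""
    return valid_from + timedelta(days=indicator_ttl_days(observable_type, marking))

# The example above: a URL indicator marked TLP:AMBER is valid for 180 days.
print(valid_until(datetime(2024, 1, 1), "URL", "TLP:AMBER"))  # 2024-06-29 00:00:00
```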
Understanding how OpenCTI calculates validity periods is essential for effective threat intelligence analysis. These rules ensure that your indicators are accurate and up-to-date, providing a reliable foundation for threat intelligence data.
+OpenCTI’s inferences and reasoning capability is a robust engine that automates the process of relationship creation within your threat intelligence data. This capability, situated at the core of OpenCTI, allows logical rules to be applied to existing relationships, resulting in the automatic generation of new, pertinent connections.
+Inferences and reasoning serve as OpenCTI’s intelligent engine. It interprets your data logically. By activating specific predefined rules (of which there are around twenty), OpenCTI can deduce new relationships from the existing ones. For instance, if there's a connection indicating an Intrusion Set targets a specific country, and another relationship stating that this country is part of a larger region, OpenCTI can automatically infer that the Intrusion Set also targets the broader region.
+When you activate an inference rule, OpenCTI continuously analyzes your existing relationships and applies the defined logical rules. These rules are logical statements that define conditions for new relationships. When the set of conditions is met, OpenCTI creates the corresponding relationship automatically.
+For example, if you activate a rule as follows:
+IF [Entity A targets Identity B] AND [Identity B is part of Identity C] +THEN [Entity A targets Identity C]
+OpenCTI will apply this rule to existing data. If it finds an Intrusion Set ("Entity A") targeting a specific country ("Identity B") and that country is part of a larger region ("Identity C"), the platform will automatically establish a relationship between the Intrusion Set and the region.
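+This kind of rule can be sketched as a simple function over an in-memory set of relationships. This is an illustrative sketch only, not OpenCTI's rule engine; the (source, type, target) triple format and names are assumptions:

```python
# Relationships as (source, type, target) triples -- a deliberate simplification.
def infer_targets(relationships):
    """Apply: IF [A targets B] AND [B part-of C] THEN [A targets C]."""
    targets = {(s, t) for s, r, t in relationships if r == "targets"}
    part_of = {(s, t) for s, r, t in relationships if r == "part-of"}
    inferred = set()
    for entity, identity in targets:
        for part, whole in part_of:
            # Only add the relationship if it does not already exist.
            if part == identity and (entity, whole) not in targets:
                inferred.add((entity, "targets", whole))
    return inferred

rels = [
    ("APT29", "targets", "France"),
    ("France", "part-of", "Western Europe"),
]
print(infer_targets(rels))  # {('APT29', 'targets', 'Western Europe')}
```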
+In the knowledge graphs: Inferred relationships are represented by dotted lines of a different color, distinguishing them from non-inferred relations.
+ +In the lists: In a relationship list, a magic wand icon at the end of the line indicates a relationship created by inference.
+ +Manual data creation in OpenCTI is an intuitive process that occurs throughout the platform. This page provides guidance on two key aspects of manual creation: Entity creation and Relationship creation.
+To create an entity:
+Before delving into the creation of relationships between objects in OpenCTI, it's crucial to grasp some foundational concepts. Here are key points to understand:
+Now, let’s explore the process of creating relationships. To do this, we will differentiate the case of containers from the others.
+When it comes to creating relationships within containers in OpenCTI, the process is straightforward. Follow these steps to attach objects to a container:
+When creating relationships not involving a container, the creation method is distinct. Follow these steps to create relationships between entities:
+While the aforementioned methods are primary for creating entities and relationships, OpenCTI offers versatility, allowing users to create objects in various locations within the platform. Here's a non-exhaustive list of additional places that facilitate on-the-fly creation:
+These supplementary methods offer users flexibility and convenience, allowing them to adapt their workflow to various contexts within the OpenCTI platform. As users explore the platform, they will naturally discover additional means of creating entities and relationships.
+OpenCTI’s merge capability stands as a pivotal tool for optimizing threat intelligence data, allowing you to consolidate multiple entities of the same type. This mechanism serves as a powerful cleanup tool, harmonizing the platform and unifying scattered information. In this section, we explore the significance of this feature, the process of merging entities, and the strategic considerations involved.
+In the ever-expanding landscape of threat intelligence and the multitude of names chosen by different data sources, data cleanliness is essential. Duplicates and fragmented information hinder efficient analysis. The merge capability is a strategic solution for amalgamating related entities into a cohesive unit. Central to the merging process is the selection of a main entity. This primary entity becomes the anchor, retaining crucial attributes such as name and description. Other entities, while losing specific fields like descriptions, are aliased under the primary entity. This strategic decision preserves vital data while eliminating redundancy.
+One of the key features of the merge capability is its ability to preserve relationships. While merging entities, their interconnected relationships are not lost. Instead, they seamlessly integrate into the new, merged entity. This ensures that the intricate web of relationships within the data remains intact, fostering a comprehensive understanding of the threat landscape.
+OpenCTI’s merge capability helps improve the quality of threat intelligence data. By consolidating entities and centralizing relationships, OpenCTI empowers analysts to focus on insights and strategies, unburdened by data silos or fragmentation. However, exercising caution and foresight in the merging process is essential, ensuring a robust and streamlined knowledge basis.
+In the STIX 2.1 standard, objects can:
+- reference other objects inside their attributes, by referencing one or multiple IDs.
+- embed other objects directly within them (nested objects, such as external references).
{
+ "type": "intrusion-set",
+ "spec_version": "2.1",
+ "id": "intrusion-set--4e78f46f-a023-4e5f-bc24-71b3ca22ec29",
+ "created_by_ref": "identity--f431f809-377b-45e0-aa1c-6a4751cae5ff", // nested reference to an identity
+ "object_marking_refs": ["marking-definition--34098fce-860f-48ae-8e50-ebd3cc5e41da"], // nested reference to multiple marking defintions
+ "external_references": [
+ {
+ "source_name": "veris",
+ "external_id": "0001AA7F-C601-424A-B2B8-BE6C9F5164E7",
+ "url": "https://github.com/vz-risk/VCDB/blob/125307638178efddd3ecfe2c267ea434667a4eea/data/json/validated/0001AA7F-C601-424A-B2B8-BE6C9F5164E7.json",
+ }
+ ],
+ "created": "2016-04-06T20:03:48.000Z",
+ "modified": "2016-04-06T20:03:48.000Z",
+ "name": "Bobcat Breakin",
+ "description": "Incidents usually feature a shared TTP of a bobcat being released within the building containing network access...",
+ "aliases": ["Zookeeper"],
+ "goals": ["acquisition-theft", "harassment", "damage"]
+}
+
In the previous example, we have 2 nested references to other objects in:
+"created_by_ref": "identity--f431f809-377b-45e0-aa1c-6a4751cae5ff", // nested reference to an identity
+"object_marking_refs": ["marking-definition--34098fce-860f-48ae-8e50-ebd3cc5e41da"], // nested reference to multiple marking defintions
+
But we also have a nested object within the entity (an External Reference
):
"external_references": [
+ {
+ "source_name": "veris",
+ "external_id": "0001AA7F-C601-424A-B2B8-BE6C9F5164E7",
+ "url": "https://github.com/vz-risk/VCDB/blob/125307638178efddd3ecfe2c267ea434667a4eea/data/json/validated/0001AA7F-C601-424A-B2B8-BE6C9F5164E7.json",
+ }
+]
+
In OpenCTI, all nested references and objects are modeled as relationships, making it easier to pivot on labels, external references, kill chain phases, marking definitions, etc.
+ +When importing and exporting data to/from OpenCTI, the translation between nested references and objects to full-fledged nodes and edges is automated and therefore transparent for the users. Here is an example with the object in the graph above:
+{
+ "id": "file--b6be3f04-e50f-5220-af3a-86c2ca66b719",
+ "spec_version": "2.1",
+ "x_opencti_description": "...",
+ "x_opencti_score": 50,
+ "hashes": {
+ "MD5": "b502233b34256285140676109dcadde7"
+ },
+ "labels": [
+ "cookiecutter",
+ "clouddata-networks-1"
+ ],
+ "external_references": [
+ {
+ "source_name": "Sekoia.io",
+ "url": "https://app.sekoia.io/intelligence/objects/indicator--3e6d61b4-d5f0-48e0-b934-fdbe0d87ab0c"
+ }
+ ],
+ "x_opencti_id": "8a3d108f-908c-4833-8ff4-4d6fc996ce39",
+ "type": "file",
+ "created_by_ref": "identity--b5b8f9fc-d8bf-5f85-974e-66a7d6f8d4cb",
+ "object_marking_refs": [
+ "marking-definition--613f2e26-407d-48c7-9eca-b8e91df99dc9"
+ ]
+}
+
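+The translation described above can be sketched as follows. This is an illustrative sketch only; the edge format and function name are assumptions, not OpenCTI's internal model:

```python
def nested_refs_to_edges(stix_object):
    """Turn nested STIX references into explicit (source, type, target) edges."""
    src = stix_object["id"]
    edges = []
    if "created_by_ref" in stix_object:
        edges.append((src, "created-by", stix_object["created_by_ref"]))
    for marking in stix_object.get("object_marking_refs", []):
        edges.append((src, "object-marking", marking))
    for ref in stix_object.get("external_references", []):
        # Nested objects become nodes of their own, linked by an edge.
        edges.append((src, "external-reference", ref.get("url") or ref["source_name"]))
    return edges

obj = {
    "id": "intrusion-set--4e78f46f-a023-4e5f-bc24-71b3ca22ec29",
    "created_by_ref": "identity--f431f809-377b-45e0-aa1c-6a4751cae5ff",
    "object_marking_refs": ["marking-definition--34098fce-860f-48ae-8e50-ebd3cc5e41da"],
}
for edge in nested_refs_to_edges(obj):
    print(edge)
```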
It is possible to receive notifications
through different notifier connectors (e.g. email or directly on the platform interface), triggered by events such as entity creation
, modification
or deletion
.
+Each user can create their own triggers. Triggers listen to all the events that match their filters and event types, and notify the user of those events via the chosen notifier(s).
+A platform administrator can create and manage triggers for a user, who will remain the trigger administrator
, as well as for a group or an organization. Users belonging to this group or organization will then have read-only
access rights on this trigger.
+The user can use filters to ensure that the created triggers are as accurate as possible.
Instance triggers are specific live triggers that listen to one or several instance(s). To create an instance trigger, you can
+An instance trigger on an entity X notifies the following events:
+Note: A notification of an entity deletion can result either from the actual deletion of the entity, or from a modification of the entity that causes the user to lose visibility of it.
+A digest allows triggering the sending of notifications based on multiple triggers
over a given period.
OpenCTI has some built-in notifier connectors that can be used as notifiers for Notifications and Activity alerting.
+Connectors are registered with a schema describing how the connector will interact.
+For example, the webhook connector has the following schema:
+- A verb (GET, POST, PUT, ...)
+- A URL
+- A template
+- Some params & headers sent through the request
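+A notifier following this schema can be sketched with the standard library only. This is a hedged sketch: the function names are hypothetical, and `str.format` stands in for the real template engine used by the platform:

```python
import urllib.request

def render(template: str, **values) -> str:
    # Stand-in for the real template engine: plain str.format substitution.
    return template.format(**values)

def send_webhook(verb: str, url: str, template: str, headers=None, **values):
    """Send a rendered template to the configured URL using the configured verb."""
    body = render(template, **values).encode("utf-8")
    request = urllib.request.Request(
        url,
        data=body,
        method=verb,
        headers={"Content-Type": "application/json", **(headers or {})},
    )
    return urllib.request.urlopen(request)

# Hypothetical usage (placeholder URL; doubled braces escape literal JSON braces):
# send_webhook("POST", "https://example.com/hook",
#              '{{"text": "{message}"}}', message="New report created")
```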
OpenCTI provides 3 built-in connectors: a webhook connector, a simplified email connector and a platform mailer connector.
+By default, OpenCTI also provides 2 sample notifiers to communicate with Teams through a webhook.
The notifiers configured in the admin section can be protected through RBAC and made accessible only to specific Users, Groups or Organizations.
+Those specified members can use the notifiers directly when configuring their triggers, digests and activity alerts.
+ +The 2 built-in notifiers are still available: Default mailer and User interface
+You can check the Microsoft website
+The default configuration for a Teams message sent through webhook for a live notification is: +
{
+ "template": {
+ "type": "message",
+ "attachments": [
+ {
+ "contentType": "application/vnd.microsoft.card.thumbnail",
+ "content": {
+ "subtitle": "Operation : <%=content[0].events[0].operation%>",
+ "text": "<%=(new Date(notification.created)).toLocaleString()%>",
+ "title": "<%=content[0].events[0].message%>",
+ "buttons": [
+ {
+ "type": "openUrl",
+ "title": "See in OpenCTI",
+ "value": "https://YOUR_OPENCTI_URL/dashboard/id/<%=content[0].events[0].instance_id%>"
+ }
+ ]
+ }
+ }
+ ]
+ },
+ "url": "https://YOUR_DOMAIN.webhook.office.com/YOUR_ENDPOINT",
+ "verb": "POST"
+}
+
The default configuration for a Teams message sent through webhook for a digest notification is: +
{
+ "template": {
+ "type": "message",
+ "attachments": [
+ {
+ "contentType": "application/vnd.microsoft.card.adaptive",
+ "content": {
+ "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
+ "type": "AdaptiveCard",
+ "version": "1.0",
+ "body": [
+ {
+ "type": "Container",
+ "items": [
+ {
+ "type": "TextBlock",
+ "text": "<%=notification.name%>",
+ "weight": "bolder",
+ "size": "extraLarge"
+ }, {
+ "type": "TextBlock",
+ "text": "<%=(new Date(notification.created)).toLocaleString()%>",
+ "size": "medium"
+ }
+ ]
+ },
+ <% for(var i=0; i<content.length; i++) { %>
+ {
+ "type": "Container",
+ "items": [<% for(var j=0; j<content[i].events.length; j++) { %>
+ {
+ "type" : "TextBlock",
+ "text" : "[<%=content[i].events[j].message%>](https://YOUR_OPENCTI_URL/dashboard/id/<%=content[i].events[j].instance_id%>)"
+ }<% if(j<(content[i].events.length - 1)) {%>,<% } %>
+ <% } %>]
+ }<% if(i<(content.length - 1)) {%>,<% } %>
+ <% } %>
+ ]
+ }
+ }
+ ],
+ "dataString": <%-JSON.stringify(notification)%>
+ },
+ "url": "https://YOUR_DOMAIN.webhook.office.com/YOUR_ENDPOINT",
+ "verb": "POST"
+}
+
The following chapter aims at giving the reader a step-by-step description of what is available on the platform and the meaning of the different tabs and entries.
+When the user connects to the platform, the home page is the Dashboard
. This Dashboard
contains several visuals summarizing the types and quantity of data recently imported into the platform.
Dashboard
+To get more information about the components of the default dashboard, you can consult the Getting started.
+The left side panel allows the user to navigate through different windows and access different views and categories of knowledge.
+The first part of the platform in the left menu is dedicated to what we call the "hot knowledge": the entities and relationships added to the platform on a daily basis, which generally require work and analysis from the users.
+Analyses
: all containers which convey relevant knowledge such as reports, groupings and malware analyses.Cases
: all types of case like incident responses, requests for information, for takedown, etc.Events
: all incidents & alerts coming from operational systems as well as sightings.Observations
: all technical data in the platform such as observables, artifacts and indicators.The second part of the platform in the left menu is dedicated to the "cold knowledge", which means this is the entities and relationships used in the hot knowledge. You can see this as the "encyclopedia" of all pieces of knowledge you need to get context: threats, countries, sectors, etc.
+Threats
: all threats entities from campaigns to threat actors, including intrusion sets.Arsenal
: all tools and pieces of malware used and/or targeted by threats, including vulnerabilities.Techniques
: all objects related to tactics and techniques used by threats (TTPs, etc.).Entities
: all non-geographical contextual information such as sectors, events, organizations, etc.Locations
: all geographical contextual information, from cities to regions, including precise positions.You can customize the experience in the platform by hiding some categories in the left menu, whether globally or for a specific role.
+In the Settings > Parameters
, it is possible for the platform administrator to hide categories in the platform for all users.
In OpenCTI, the different roles are highly customizable. It is possible to defined default dashboards, triggers, etc. but also be able to hide categories in the roles:
+ +Although there are many different entities in OpenCTI and many different tabs, most of them are quite similar and only have minor differences from the other, mostly due to some of their characteristics, which requires specific fields or do not require some fields which are necessary for the other.
+In this part will only be detailed a general outline of a "typical" OpenCTI page. The specifies of the different entities will be detailed in the corresponding pages below (Activities and Knowledge).
+ + +In the Overview
tab on the entity, you will find all properties of the entity as well as the recent activities.
First, you will find the Details
section, where are displayed all properties specific to the type of entity you are looking at, an example below with a piece of malware:
Thus, in the Basic information
section, are displayed all common properties to all objects in OpenCTI, such as the marking definition, the author, the labels (i.e. tags), etc.
Below these two sections, you will find latest modifications in the Knowledge base related to the Entity:
+Latest created relationships
: display the latest relationships that have been created from or to this Entity. For example, latest Indicators of Compromise and associated Threat Actor of a Malware.latest containers about the object
: display all the Cases and Analyses that contain this Entity. For example, the latest Reports about a Malware.External references
: display all the external sources associated with the Entity. You will often find here links to external reports or webpages from where Entity's information came from.History
: display the latest chronological modifications of the Entity and its relationships that occurred in the platform, in order to trace back any alteration.
+ +In the Knowledge
tab, which is the central part of the entity, you will find all the Knowledge related to the current entity. The Knowledge
tab is different for Analyses (Report
, Groupings
) and Cases (Incident response
, Request for Information
, Request for Takedown
) entities than for all the other entity types.
Knowledge
tab of those entities (which represent Analyses or Cases that can contain a collection of Objects) is the place to integrate and link together entities. For more information on how to integrate information in OpenCTI using the knowledge tab of a report, please refer to the part Manual creation.
Knowledge
tabs of any other entity (that does not aim to contain a collection of Objects) gather all the entities which have been at some point linked to the entity the user is looking at. For instance, as shown in the following capture, the Knowledge
tab of Intrusion set APT29, gives access to the list of all entities APT29 is attributed to, all victims the intrusion set has targeted, all its campaigns, TTPs, malware etc. For entities to appear in these tabs under Knowledge
, they need to have been linked to the entity directly or have been computed with the inference engine.The Indicators
and Observables
section offers 3 display modes:
+- The entities view
, which displays the indicators/observables linked to the entity.
+- The relationship view
, which displays the various relationships between the indicators/observables linked to the entity and the entity itself.
+- The contextual view
, which displays the indicators/observables contained in the cases and analyses that contain the entity.
The Content
tab allows for uploading and creating outcomes documents related to the content of the current entity (in PDF, text, HTML or markdown files). This specific tab enable to previzualize, manage and write deliverable associated with the entity. For example an analytic report to share with other teams, a markdown files to feed a collaborative wiki with, etc.
The Content
tab is available for a subset of entities: Report
, Incident
, Incident response
, Request for Information
, and Request for Takedown
.
The Analyses
tab contains the list of all Analyses (Report
, Groupings
) and Cases (Incident response
, Request for Information
, Request for Takedown
) in which the entity has been identified.
By default, this tab display the list, but you can also display the content of all the listed Analyses on a graph, allowing you to explore all their Knowledge and have a glance of the context around the Entity.
+ + +The Data
tab contains documents that are associated to the object and were either:
Analyst Workbench can also be created from here. They will contain the entity by default.
+ +In addition, the Data
tab of Threat actors (group)
, Threat actors (individual)
, Intrusions sets
, Organizations
, Individuals
have an extra panel:
The History
tab displays the history of changes to the Entity: attribute updates, creation of relations, etc.
Because of the volume of information, the history is written in a dedicated index that consumes the Redis stream to rebuild the history for the UI. +
+Observables
tab (for Reports and Observed data): A table containing all SCO (Stix Cyber Observable) contained in the Report or the Observed data, with search and filters available. It also displays if the SCO has been added directly or through inferences with the reasoning engineEntities
tab (for Reports and Observed data): A table containing all SDO (Stix Domain Objects) contained in the Report or the Observed data, with search and filters available. It also displays if the SDO has been added directly or through inferences with the reasoning engineSightings
tab (for Indicators and Observables): A table containing all Sightings
relationships corresponding to events in which Indicators
(IP, domain name, url, etc.) are detected by or within an information system, an individual or an organization. Most often, this corresponds to a security event transmitted by a SIEM or EDR.
+In OpenCTI, all data can be represented as a large knowledge graph: everything is linked to something.
+You can pivot on any entity and on any relationship you have in your platform, using investigations.
+Investigations are available on the top right of the top bar:
+ +Investigations are organized by workspace. When you create a new empty workspace, it will only be visible by you and enables you to work on your investigation before sharing it.
+In your workspace, you can add entities that you want to investigate, visualize the data linked to these entities, add relationships, and export your investigation graph in pdf, image or as new stix report.
+ +You can see next to them a bullet with a number inside. It is a visual indication showing you how many entities are linked to this one and not displayed in the graph yet.
+Note that this number is an approximation of the number of entities. That's why there is a ~
next to the number.
No bullet displayed means there is nothing to expand from this node.
+You can add any existing entity of the platform to your investigation.
+ +Once added, you can select the entity, and see its details in the panel that appears on the right of the screen.
+In the same menu as above, right next to "Add en entity", you can expand the selected entity. Clicking on the menu icon open a new window where you can choose which type of entities and relationships you want to expand.
+For each type of entity or relationship, the number of elements that will be added into the investigation graph is displayed in parentheses. This time there is no ~
symbol as the number is exact.
For example, in the image above, selecting target Malware and relationship Uses means: expand in my investigation graph all Malwares linked to this node with a relationship of type Uses.
+You can add a relationship between entities directly in your investigation.
+ +You can export your investigation in PDF or image format. +You can also download all the content of your investigation graph in a Report stix bundle (investigation is automatically converted).
+ +You can turn your investigation to : +- a grouping +- an incident response +- a report +- a request for information +- a request for takedown
+ +Either, you create a new report or case +
+ +Or, you select an existing entity +
+ +Once you have clicked on the ADD
button, the browser will be redirected to the Knowledge
tab of the Report or Cases you added the content of your investigation. If you added it to multiple reports or cases, you will be redirected to the first of the list.
+
In (Cyber) Threat Intelligence, the evaluation of information sources and of information quality is one of the most important aspects of the work. It is of the utmost importance to assess situations by taking into account the reliability of the sources and the credibility of the information.
+This concept is foundational in OpenCTI, and has real impact on:
+Reliability of a source of information is a measurement of the trust that the analyst can have about the source, based on the technical capabilities or history of the source. Is the source a reliable partner with long sharing history? A competitor? Unknown?
+Reliability of sources are often stated at organizational level, as it requires an overview of the whole history with it.
+In the Intelligence field, Reliability is often notated with the NATO Admiralty code.
+Reliability of a source is important but even a trusted source can be wrong. Information in itself has a credibility, based on what is known about the subject and the level of corroboration by other sources.
+Credibility is often stated at the analyst team level, expert of the subject, able to judge the information with its context.
+In the Intelligence field, Confidence is often notated with the NATO Admiralty code.
+Why Confidence instead of Credibility?
+Using both Reliability and Credibility is an advanced use case for most of CTI teams. It requires a mature organization and a well staffed team. For most of internal CTI team, a simple confidence level is enough to forge assessment, in particular for teams that concentrate on technical CTI.
+Thus in OpenCTI, we have made the choice to fuse the notion of Credibility with the Confidence level that is commonly used by the majority of users. They have now the liberty to push forward their practice and use both Confidence and Reliability in their daily assessments.
+Reliability value can be set for every Entity in the platform that can be Author of Knowledge:
+Organizations
Individuals
Systems
Reports
Reliability on Reports
allows you to specify the reliability associated to the original author of the report if you received it through a provider.
For all Knowledge in the platform, the reliability of the source of the Knowledge (author) is displayed in the Overview. This way, you can always forge your assessment of the provided Knowledge regarding the reliability of the author.
+ +You can also now filter entities by the reliability of its author.
+Tip
+This way, you may choose to feed your work with only Knowledge provided by reliable sources.
+Reliability is an open vocabulary that can be customized in Settings -> Taxonomies -> Vocabularies : reliability_ov.
+Info
+The setting by default is the Reliability scale from NATO Admiralty code. But you can define whatever best fit your organization.
+Confidence level can be set for:
+Report
, Grouping
, Malware analysis
, Notes
Incident Response
, Request for Information
, Request for Takedown
, Feedback
Incident
, Sighting
, Observed data
Indicator
, Infrastructure
Threat actor (Group)
, Threat actor (Individual)
, Intrusion Set
, Campaign
Malware
, Channel
, Tool
, Vulnerability
For all of these entities, the Confidence level is displayed in the Overview, along with the Reliability. This way, you can rapidly assess the Knowledge with the Confidence level representing the credibility/quality of the information.
+Confidence level is a numerical value between 0 and 100. But Multiple "Ticks" can be defined and labelled to provide a meaningful scale.
+Confidence level can be customized for each entity type in Settings > Customization > Entity type.
+ +As such customization can be cumbersome, three confidence level templates are provided in OpenCTI:
+It is always possible to modify an existing template to define a custom scale adapted to your context.
+Tip
+If you use the Admiralty code setting for both reliability and Confidence, you will find yourself with the equivalent of NATO confidence notation in the Overview of your different entities (A1, B2, C3, etc.)
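+Combining the two notations can be sketched with a trivial helper; the label format follows the examples used on this page:

```python
def nato_notation(reliability: str, confidence: str) -> str:
    """'B - Usually Reliable' + '2 - Probably True' -> 'B2'."""
    return reliability.split(" - ")[0] + confidence.split(" - ")[0]

print(nato_notation("B - Usually Reliable", "2 - Probably True"))  # B2
```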
+Your organization has received a report from a CTI provider. At your organization level, this provider is considered reliable most of the time and its reliability level has been set to "B - Usually Reliable" (your organization uses the Admiralty code).
+This report concerns the ransomware threat landscape and has been analysed by your CTI analyst specialized in cybercrime. This analyst has granted a confidence level of "2 - Probably True" to the information.
+As a technical analyst, through the cumulated reliability and Confidence notations, you now know that the technical elements of this report are probably worth consideration.
+As a CTI analyst in a governmental CSIRT, you build up Knowledge that will be shared within the platform to beneficiaries. Your CSIRT is considered a reliable source by your beneficiaries, even if you play the role of a proxy with other sources, but your beneficiaries need some insights about how the Knowledge has been built and gathered.
+For that, you use the "Objective" confidence scale in your platform. When the Knowledge is the result of your CSIRT's own investigation, either from incident response or attack infrastructure investigation, you set the confidence level to "Witnessed", "Deduced" or "Induced" (depending on whether you observed the data directly or inferred it during your research). When the information has not been verified by the CSIRT but has value to be shared with beneficiaries, you can use the "Told" level to make it clear that the information is probably valuable but has not been verified.
+In OpenCTI, you have access to different capabilities for searching knowledge in the platform. In most cases, a keyword search can be refined with additional filters, for instance on the type of object, the author, etc.
+The global search is always available in the top bar of the platform.
+ +This search covers all STIX Domain Objects (SDOs) and STIX Cyber Observables (SCOs) in the platform. The search results are sorted according to the following behaviour:
+The keyword is matched against the name, the aliases and the description attributes (full text search).
+If you get unexpected results, it is always possible to add some filters after the initial search:
+ +Also, using the Advanced search button, it is possible to add filters directly to a global search:
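To illustrate what such advanced search filters look like under the hood, the sketch below builds an OpenCTI-style filter group as plain JSON. The field names (`mode`, `filters`, `key`, `values`, `operator`, `filterGroups`) follow the filter format used by recent OpenCTI versions, but the exact keys and the author id used here are assumptions to check against your platform's API schema:

```python
import json

def filter_group(entity_types, author_id):
    """Build an OpenCTI-style filter group combining an entity-type
    filter and an author filter with AND logic. Field names are an
    assumption based on the filter format of recent OpenCTI versions;
    verify them against your platform's GraphQL schema."""
    return {
        "mode": "and",
        "filters": [
            {"key": "entity_type", "values": entity_types, "operator": "eq", "mode": "or"},
            {"key": "createdBy", "values": [author_id], "operator": "eq", "mode": "or"},
        ],
        "filterGroups": [],
    }

# Hypothetical author id, for illustration only.
filters = filter_group(["Report", "Malware"], "identity--1234")
print(json.dumps(filters, indent=2))
```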
The bulk search capability is available in the top bar of the platform and allows you to copy and paste a list of keywords or object values (i.e. a list of domains, IP addresses, vulnerabilities, etc.) to search for in the platform:
+ +When searching in bulk, OpenCTI only looks for an exact match in the following properties:
+name
aliases
x_opencti_aliases
x_mitre_id
value
subject
abstract
hashes_MD5
hashes_SHA1
hashes_SHA256
hashes_SHA512
x_opencti_additional_names
When something is not found, it appears in the list as Unknown and will be excluded if you choose to export your search results as a JSON STIX bundle or a CSV file.
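The exact-match behaviour described above can be sketched in a few lines. Everything here (the entity dataset, the helper name `bulk_search`) is made up for illustration; it only mimics the matching and Unknown-reporting logic, not the actual platform implementation:

```python
# Properties checked for an exact match during bulk search,
# as listed in the documentation above.
EXACT_MATCH_PROPERTIES = [
    "name", "aliases", "x_opencti_aliases", "x_mitre_id", "value",
    "subject", "abstract", "hashes_MD5", "hashes_SHA1", "hashes_SHA256",
    "hashes_SHA512", "x_opencti_additional_names",
]

def bulk_search(keywords, entities):
    """Map each keyword to the name of the first entity with an exact
    match on one of the properties above, or to "Unknown"."""
    results = {}
    for keyword in keywords:
        match = None
        for entity in entities:
            for prop in EXACT_MATCH_PROPERTIES:
                candidate = entity.get(prop)
                values = candidate if isinstance(candidate, list) else [candidate]
                if keyword in values:  # exact match only, no partial matching
                    match = entity
                    break
            if match:
                break
        results[keyword] = match["name"] if match else "Unknown"
    return results

# Demo dataset, entirely made up.
entities = [
    {"name": "Emotet", "aliases": ["Geodo"]},
    {"name": "evil.com", "value": "evil.com"},
]
print(bulk_search(["Geodo", "evil.com", "8.8.8.8"], entities))
# "8.8.8.8" is not in the demo dataset, so it is reported as "Unknown"
```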
In most knowledge screens, a contextual search bar allows you to filter the list you are viewing:
+ +The search keyword used here is taken into account if you decide to export the current view to a file such as a JSON STIX bundle or a CSV file.
+Some other screens contain search bars for specific purposes. For instance, the graph views include a search bar to filter the nodes displayed on the graph:
+ + + + + + + + + + + + + + + + + + + +Workbenches serve as dedicated workspaces for manipulating data before it is officially imported into the platform.
+The workbenches are located at various places within the platform:
+This window encompasses all the necessary tools for importing a file. Files imported through this interface will subsequently be processed by the import connectors, resulting in the creation of workbenches. Additionally, analysts can manually create a workbench by clicking on the "+" icon at the bottom right of the window.
+ +Workbenches are also accessible through the "Data" tabs of entities, providing convenient access to import data associated with the entity.
+ +Workbenches are automatically generated upon the import of a file through an import connector. When an import connector is initiated, it scans files for recognizable entities and subsequently creates a workbench. All identified entities are placed within this workbench for analyst review. +Alternatively, analysts have the option to manually create a workbench by clicking on the "+" icon at the bottom right of the "Data import and analyst workbenches" window.
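Conceptually, the entities a connector extracts before validation resemble a draft STIX bundle. A minimal, hypothetical sketch of such a bundle (names, pattern and ids are demo values, not actual connector output):

```python
import json
import uuid

# Minimal STIX 2.1 bundle of the kind an import connector might extract
# from a file before it lands in a workbench for analyst review.
# All entity values below are made up for the demo.
bundle = {
    "type": "bundle",
    "id": f"bundle--{uuid.uuid4()}",
    "objects": [
        {
            "type": "malware",
            "spec_version": "2.1",
            "id": f"malware--{uuid.uuid4()}",
            "name": "DemoRansomware",
            "is_family": True,
        },
        {
            "type": "indicator",
            "spec_version": "2.1",
            "id": f"indicator--{uuid.uuid4()}",
            "name": "Demo C2 domain",
            "pattern": "[domain-name:value = 'c2.example.com']",
            "pattern_type": "stix",
            "valid_from": "2024-01-01T00:00:00Z",
        },
    ],
}
print(json.dumps(bundle, indent=2))
```

Validating the workbench is the step that turns such draft objects into actual records in the knowledge base.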
+ +Since the workbench is a draft space, analysts use it to review connector proposals before finalizing them for import. Within the workbench, analysts can add, delete, or modify entities to meet specific requirements.
+ +Once the content within the workbench is deemed acceptable, the analyst initiates the ingestion process by clicking on Validate this workbench. This action writes the data into the knowledge base.
Workbenches are drafting spaces
+Until the workbench is validated, the contained data remains in draft form and is not recorded in the knowledge base. This ensures that only reviewed and approved data is officially integrated into the platform.
+For more information on importing files, refer to the Import from files documentation page.
+ + + + + + + + + + + + + + + + + + +Under construction
+We are doing our best to complete this page. +If you want to participate, don't hesitate to join the Filigran Community on Slack +or submit your pull request on the Github doc repository.
+