
Omron

Omron is the next stage in cryptographically verified, efficient, and intelligent networks. By leveraging incentive design, Omron optimizes the generation of zero-knowledge proofs and verified compute across verticals such as zkML, zkRollups, training, model distillation, and much more.

Infrastructure
Zero-Knowledge Proofs
Maximum Bounty: $100,000
Live Since: 30 September 2024
Last Updated: 01 October 2024
  • PoC required

  • KYC required

Rewards

Rewards by Threat Level

Smart Contract

Critical: USD 50,000 - USD 100,000
High: USD 25,000 - USD 50,000
Medium: USD 10,000
Low: USD 1,000

Websites and Applications

Critical: USD 11,000 - USD 15,000
High: USD 10,000
Medium: USD 3,000
Low: USD 1,000

Rewards are distributed according to the impact of the vulnerability based on the Immunefi Vulnerability Severity Classification System V2.3.

Reward Calculation for Critical Level Reports

For critical smart contract bugs, the reward amount is 10% of the funds directly affected, up to a maximum of USD 100,000. The calculation of the amount of funds at risk is based on the time and date the bug report is submitted. However, a minimum reward of USD 50,000 is paid out in order to incentivize security researchers against withholding a critical bug report.
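
For illustration only, the sketch below expresses this critical reward rule as a small calculation; the function name and inputs are hypothetical and the final amount remains subject to the terms above.

```python
def critical_reward_usd(funds_at_risk_usd: float) -> float:
    """Illustrative calculation of the critical smart contract reward.

    The reward is 10% of the funds directly affected, with a floor of
    USD 50,000 and a cap of USD 100,000; funds at risk are measured as of
    the time and date the bug report is submitted.
    """
    MIN_REWARD = 50_000
    MAX_REWARD = 100_000
    reward = 0.10 * funds_at_risk_usd
    return min(max(reward, MIN_REWARD), MAX_REWARD)


# Examples:
# critical_reward_usd(200_000)   -> 50,000  (10% falls below the floor)
# critical_reward_usd(750_000)   -> 75,000
# critical_reward_usd(2_000_000) -> 100,000 (capped at the maximum)
```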

Repeatable Attack Limitations

  • If the smart contract where the vulnerability exists can be upgraded or paused, only the initial attack will be considered for a reward. This is because the Project can mitigate the risk of further exploitation by upgrading or pausing the component where the vulnerability exists. The reward amount will depend on the severity of the impact and the funds at risk.

  • For critical repeatable attacks on smart contracts that cannot be upgraded or paused, the Project will consider the cumulative impact of the repeatable attacks for a reward. This is because the Project cannot prevent the attacker from repeatedly exploiting the vulnerability until all funds are drained and/or other irreversible damage is done. Therefore, this warrants a reward equivalent to 10% of funds at risk, capped at the maximum critical reward.

Reward Calculation for High Level Reports

  • High vulnerabilities concerning theft/permanent freezing of unclaimed yield/royalties are rewarded within a range of USD 25,000 to USD 50,000, depending on the funds at risk and capped at the maximum high reward.

  • In the event of temporary freezing, the reward doubles from the full frozen value for every additional [24h] that the funds remain temporarily frozen, up to the maximum cap of the high reward. This is because as the duration of the freezing lengthens, the potential for greater damage and subsequent reputational harm intensifies. Thus, by increasing the reward proportionally with the frozen duration, the Project ensures stronger incentives for the disclosure of bugs of this nature. An illustrative sketch of this doubling rule follows this list.
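
The sketch below shows one possible reading of the doubling rule, for illustration only; the starting reward amount, the treatment of the first period, and the period count are assumptions, and the final amount remains subject to the program's terms above.

```python
def temporary_freeze_reward_usd(initial_reward_usd: float, periods_frozen: int) -> float:
    """One possible reading of the doubling rule for temporarily frozen funds.

    ASSUMPTION: the reward starts at `initial_reward_usd` (set at the
    program's discretion based on the value frozen) and doubles for each
    additional [24h] period that the funds remain frozen, capped at the
    maximum high reward of USD 50,000.
    """
    MAX_HIGH_REWARD = 50_000
    doublings = max(periods_frozen - 1, 0)  # no doubling for the first period
    reward = initial_reward_usd * (2 ** doublings)
    return min(reward, MAX_HIGH_REWARD)


# Example (hypothetical numbers): an initial USD 10,000 reward with funds
# frozen for three [24h] periods -> 10,000 * 2**2 = 40,000, under the cap.
```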

Critical web/app bug reports will be rewarded with USD 15,000 only if the impact leads to:

  • A loss of funds involving an attack that does not require any user action
  • Private key or private key generation leakage leading to unauthorized access to user funds

All other impacts that would be classified as Critical are rewarded a flat amount of USD 11,000. The rest of the severity levels are paid out according to the Impact in Scope table.

Reward Payment Terms

The Inference Labs team shall handle payment of the rewards. Rewards will be denominated in USD, but will be paid in USDC or USDT on Ethereum. The Inference Labs team shall have full discretion over the payment date and payment method in respect of the rewards.

Program Overview

Omron is the next stage in cryptographically verified, efficient, and intelligent networks. By leveraging incentive design, Omron optimizes the generation of zero-knowledge proofs and verified compute across verticals such as zkML, zkRollups, training, model distillation, and much more.

Omron serves as a dynamic hub, connecting users to specialized services within Bittensor subnets. It enables access to diverse offerings such as data, computational tasks, and complex analyses, while ensuring the origins of AI outputs are cryptographically verified through zero-knowledge proofs with Proof-of-Inference.

Omron’s computational integrity is reinforced through the use of a Proof-of-Inference mechanism, a protocol that leverages zero-knowledge proofs to verify the proper computation of AI models and their resulting inferences. This provides not only peace of mind and protection from malicious or malfunctioning AI, but also allows external networks, smart contracts, and AI agents to automatically accept output results at face value due to this improved security, adding a new level of efficiency.
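
As a purely conceptual illustration of the pattern described above (outputs are accepted only when their accompanying proof verifies), the sketch below uses hypothetical names; `VerifiedInference`, `verify_proof`, and `accept_inference` are placeholders and not part of the Omron API.

```python
from dataclasses import dataclass


@dataclass
class VerifiedInference:
    output: bytes          # the model's inference result
    proof: bytes           # zero-knowledge proof that the computation was performed correctly
    model_commitment: str  # commitment identifying the model that produced the output


def verify_proof(proof: bytes, output: bytes, model_commitment: str) -> bool:
    """Hypothetical placeholder for a zero-knowledge proof verifier; the real
    Proof-of-Inference verification is performed by the Omron protocol."""
    raise NotImplementedError("placeholder only; not part of the Omron API")


def accept_inference(result: VerifiedInference) -> bytes:
    # Downstream consumers (external networks, smart contracts, AI agents)
    # accept the output at face value only once the proof checks out.
    if not verify_proof(result.proof, result.output, result.model_commitment):
        raise ValueError("Proof-of-Inference verification failed; output rejected")
    return result.output
```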

For more information about Omron, please visit https://omron.ai/

Inference Labs provides rewards in USDC or USDT on Ethereum, denominated in USD. Inference Labs shall have full access and control over its Immunefi vault (the "Vault"), in which Inference Labs will deposit USDC and/or USDT to pay rewards, and the assets in the Vault (the “Assets”). For the avoidance of doubt, Inference Labs shall have full discretion to withdraw the Assets from the Vault at any time.

For more details about the payment process, please view the Rewards by Threat Level section.

KYC Requirement

Inference Labs will be requesting KYC information in order to process payment for successful bug submissions. The following information will be required:

  • Full name
  • Date of birth
  • Proof of address (either a redacted bank statement with address or a recent utility bill, dated within the past three (3) months)
  • Copy of passport or other government-issued identification document

Eligibility Criteria

Security researchers who wish to participate in the Program must adhere to the rules of engagement set forth in the Terms and cannot be:

  • On the Specially Designated Nationals and Blocked Persons list of the U.S. Office of Foreign Assets Control
  • An official contributor of the Project, past or present
  • Employees and/or individuals closely associated with the Project
  • Security auditors that directly or indirectly participated in the audit review of the project

Responsible Publication

Inference Labs adheres to Publication Category 3 - Approval Required. This Policy category determines what information researchers are allowed to make public from their submitted bug reports. For more information about the category selected, please refer to our Responsible Publication page.

Primacy of Impact vs Primacy of Rules

Inference Labs adheres to the Primacy of Rules, which means that the whole bug bounty program is run strictly under the terms and conditions stated within this page.

Proof of Concept (PoC) Requirements

A PoC, demonstrating the bug's impact, is required for this program and has to comply with the Immunefi PoC Guidelines and Rules.

Known Issue Assurance

Inference Labs commits to providing Known Issue Assurance to bug submissions through their program. This means that Inference Labs will either disclose known issues publicly, or at the very least, privately via a self-reported bug submission.

In the event of a mediation, this allows for a more objective and streamlined process for proving that an issue is known. Otherwise, assuming the bug report is valid, the report would be considered in-scope and due a reward.

Previous Audits

Inference Labs’ completed audit reports can be found at https://github.com/inference-labs-inc/omron-contracts/blob/main/audits/Zellic.pdf and https://github.com/inference-labs-inc/omron-contracts/blob/main/audits/Testmachine.pdf. Any unfixed vulnerabilities mentioned in these reports are not eligible for a reward.

Feasibility Limitations

The Project may be receiving reports that are valid (the bug and attack vector are real) and cite assets and impacts that are in scope, but there may be obstacles or barriers to executing the attack in the real world. In other words, there is a question about how feasible the attack really is. Conversely, there may also be mitigation measures that projects can take to prevent the impact of the bug, which are not feasible or would require unconventional action and hence, should not be used as reasons for downgrading a bug's severity.

Therefore, Immunefi has developed a set of feasibility limitation standards which by default state what security researchers, as well as projects, can or cannot cite when reviewing a bug report.

Immunefi Standard Badge

By adhering to Immunefi’s best practice recommendations, Inference Labs has satisfied the requirements for the Immunefi Standard Badge.

KYC required

The submission of KYC information is a requirement for payout processing.

Proof of Concept

Proof of concept is always required for all severities.

Responsible Publication

Category 3: Approval Required

Prohibited Activities

Default prohibited activities
  • Any testing on mainnet or public testnet deployed code; all testing should be done on local forks of either public testnet or mainnet
  • Any testing with pricing oracles or third-party smart contracts
  • Attempting phishing or other social engineering attacks against our employees and/or customers
  • Any testing with third-party systems and applications (e.g. browser extensions) as well as websites (e.g. SSO providers, advertising networks)
  • Any denial of service attacks that are executed against project assets
  • Automated testing of services that generates significant amounts of traffic
  • Public disclosure of an unpatched vulnerability in an embargoed bounty
  • Any other actions prohibited by the Immunefi Rules
