
Malicious authorization server attack in UMA 2.0


In a previous post, I described a pass-the-permission-ticket vulnerability in UMA 2.0 in which a malicious UMA resource server could kindly ask a UMA client to give it access tokens actually intended for another UMA resource server. In this post, I describe a similar attack in which the authorization server itself is malicious.

This attack is actually an instance of a generic (documented) problem of phishing by malicious authorization servers in OAuth. As far as I understand, the only mitigations are the same kinds of mitigations which can be used against other phishing attempts. This should probably be covered in a “security considerations” section or a “UMA security best practices” document but probably does not require modifying or extending the UMA protocol. Clear consent validation user interfaces may help the user detect the attack. However, the attack may be difficult for the average user to detect.

See as well:


Description

Case 1, malicious resource server and authorization server

Summary: a malicious server acting as both a resource server (RS1) and an authorization server (AS1) may trick the requesting-party into granting it an RPT for another resource server RS2.

This works as follows (a code sketch of the attacker-side steps appears after the sequence diagram below):

  1. the malicious resource server RS1 is registered on the target authorization server AS2 as a UMA client;
  2. the client application is tricked into making a request to the malicious resource server RS1;
  3. RS1 makes a request on the target resource server RS2 and obtains a permission ticket PT-XXXX;
  4. RS1 redirects the client to a fake authorization server AS1 under its control;
  5. the client application registers itself on the fake authorization server AS1;
  6. the client starts an interactive claims-gathering on the fake authorization server AS1, i.e. it redirects the user to AS1 (e.g. https://as1/claims_gathering?...);
  7. AS1 redirects the requesting-party's user-agent to AS2 (e.g. https://as2/claims_gathering?...);
  8. the requesting-party consents, thinking they are granting their client access to resources on RS1, whereas they are actually granting RS1/AS1 access to resources on RS2[1];
  9. the real authorization server AS2 redirects the user to the malicious authorization server AS1 with a new permission ticket;
  10. the malicious authorization server AS1 exchanges this ticket for the RPT;
  11. RS1/AS1 can now make requests on RS2 on the RqP's behalf.
Sequence diagram of malicious resource server acting as an authorization server
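To make the attacker's side of this flow concrete, here is a minimal sketch of steps 7, 9 and 10, assuming RS1/AS1 is registered at AS2 as a confidential client. All URLs, credentials and helper names are hypothetical; only the parameter names (ticket, claims_redirect_uri, the uma-ticket grant type) come from the UMA 2.0 grant.

```python
# Minimal sketch of steps 7, 9 and 10 from the point of view of RS1/AS1,
# assuming it is registered at AS2 as a confidential client. All URLs,
# credentials and helper names are hypothetical.
from urllib.parse import parse_qs, urlencode, urlparse

import requests

AS2_CLAIMS_ENDPOINT = "https://as2.example/claims_gathering"   # assumed
AS2_TOKEN_ENDPOINT = "https://as2.example/token"               # assumed
ATTACKER_CLIENT_ID = "rs1-client-id-at-as2"                    # RS1's registration at AS2
ATTACKER_CLIENT_SECRET = "rs1-client-secret"                   # hypothetical
ATTACKER_CLAIMS_REDIRECT_URI = "https://as1/claims_callback"   # attacker-controlled


def claims_gathering_redirect(permission_ticket: str, state: str) -> str:
    """Step 7: AS1 redirects the requesting party's user-agent to AS2."""
    params = {
        "client_id": ATTACKER_CLIENT_ID,
        "ticket": permission_ticket,            # PT-XXXX obtained from RS2 in step 3
        "claims_redirect_uri": ATTACKER_CLAIMS_REDIRECT_URI,
        "state": state,
    }
    return f"{AS2_CLAIMS_ENDPOINT}?{urlencode(params)}"


def exchange_ticket_for_rpt(callback_url: str) -> str:
    """Steps 9-10: AS2 redirected the user-agent back to AS1 with a fresh
    permission ticket; AS1 exchanges it for an RPT at AS2's token endpoint."""
    new_ticket = parse_qs(urlparse(callback_url).query)["ticket"][0]
    response = requests.post(
        AS2_TOKEN_ENDPOINT,
        data={
            "grant_type": "urn:ietf:params:oauth:grant-type:uma-ticket",
            "ticket": new_ticket,
        },
        auth=(ATTACKER_CLIENT_ID, ATTACKER_CLIENT_SECRET),
    )
    response.raise_for_status()
    return response.json()["access_token"]      # RPT usable against RS2 (step 11)
```

From AS2's point of view nothing here is anomalous: it only sees one of its registered clients running an ordinary interactive claims-gathering flow.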

Requirements:

Case 2, malicious authorization server

The same attack also works if the original resource server RS1 is not malicious but is connected to a malicious (or compromised) authorization server AS1.

Sequence diagram of malicious authorization server

Am I impacted?

You might be impacted if the following are true:

The problem of malicious authorization servers is mentioned in the general context of OAuth in the resource metadata draft:

Implementers need to be aware that if an inappropriate authorization server is used by the client, that an attacker may be able to act as a man-in-the-middle proxy to a valid authorization server without it being detected by the authorization server or the client.
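For context, in UMA the client learns which authorization server to use from the resource server's 401 challenge: the as_uri and ticket parameters of the WWW-Authenticate: UMA header. A client that follows as_uri without any pre-established trust therefore ends up interacting with whatever authorization server the (possibly malicious or compromised) resource server advertises. A minimal parsing sketch, with an illustrative header value:

```python
# Sketch: a UMA client extracting the authorization server URI and permission
# ticket from a resource server's 401 challenge. The header value is illustrative.
import re


def parse_uma_challenge(www_authenticate: str) -> dict:
    """Extract the parameters of a 'WWW-Authenticate: UMA ...' challenge."""
    if not www_authenticate.startswith("UMA "):
        raise ValueError("not a UMA challenge")
    return dict(re.findall(r'(\w+)="([^"]*)"', www_authenticate))


challenge = parse_uma_challenge(
    'UMA realm="rs1", as_uri="https://as1", ticket="PT-XXXX"'
)
# Following challenge["as_uri"] blindly means registering and gathering claims
# at whatever authorization server the resource server chose to advertise.
```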

Mitigations

Warning: lack of effect of sender-constrained access tokens

DPoP is not able to protect against these attacks! This is true even if DPoP is required by the client, by the target authorization server AS2 and by the target resource server RS2. This is because the DPoP binding happens between the malicious server RS1/AS1, the target authorization server AS2 and the target resource server RS2: the RPT is bound to a key held by RS1/AS1 itself.

Certificate-bound access tokens (mTLS) cannot protect against these attacks either.
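To see why, consider a sketch of step 10 with DPoP enabled: the attacker (RS1/AS1) generates its own key pair and signs the DPoP proof for its own token request at AS2, so the issued RPT is sender-constrained to a key the attacker holds and will use when calling RS2. This assumes PyJWT with the cryptography backend; the endpoint URL is illustrative.

```python
# Sketch: the attacker (RS1/AS1) generates its own key pair and builds the DPoP
# proof for its token request at AS2. Requires PyJWT with the cryptography
# backend; the endpoint URL is illustrative.
import json
import time
import uuid

import jwt
from cryptography.hazmat.primitives.asymmetric import ec
from jwt.algorithms import ECAlgorithm

attacker_key = ec.generate_private_key(ec.SECP256R1())  # key held by the attacker


def dpop_proof(htm: str, htu: str) -> str:
    """Build a DPoP proof JWT signed with the attacker's own key."""
    public_jwk = json.loads(ECAlgorithm.to_jwk(attacker_key.public_key()))
    return jwt.encode(
        {"htm": htm, "htu": htu, "iat": int(time.time()), "jti": str(uuid.uuid4())},
        attacker_key,
        algorithm="ES256",
        headers={"typ": "dpop+jwt", "jwk": public_jwk},
    )

# Sent as the 'DPoP' header of the attacker's own token request (step 10): the
# issued RPT is sender-constrained to the attacker's key, not to the victim client.
proof = dpop_proof("POST", "https://as2.example/token")
```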

See the phishing section of the OAuth 2.0 Protected Resource Metadata draft for related recommendations for similar attacks in OAuth:

This specification may be deployed in a scenario where the desired HTTP resource is identified by a user-selected URL. If this resource is malicious or compromised, it could mislead the user into revealing their account credentials or authorizing unwanted access to OAuth-controlled capabilities. This risk is reduced, but not eliminated, by following best practices for OAuth user interfaces, such as providing clear notice to the user, displaying the authorization server's domain name, supporting origin-bound phishing-resistant authenticators, supporting the use of password managers, and applying heuristic checks such as domain reputation.

Possible mitigations for the malicious authorization server include:

Mitigations in the client

The following mitigations could be implemented by the client:

Mitigations in the authorization server

The following mitigations could be implemented by the authorization server:

Warning: spoofed client and resource server information

The information (name, URLs, etc.) of the clients and resource servers can be spoofed if this information comes from an untrusted source. If dynamic client registration is used, a malicious client or a malicious resource server can declare URLs for domains they do not even own.

As a consequence, checking the reputation of the client and resource server applications based on domain name is probably not useful for untrusted servers.
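As an illustration, here is what such a registration could look like with OAuth 2.0 dynamic client registration (RFC 7591). The registration endpoint and all values are hypothetical, and the display metadata points at a domain the registrant does not control.

```python
# Sketch of a dynamic client registration (RFC 7591) with spoofed display
# metadata. The registration endpoint and all values are illustrative.
import requests

registration = {
    "client_name": "Trusted Health Portal",                    # spoofed display name
    "client_uri": "https://trusted-hospital.example",          # domain not owned by the registrant
    "logo_uri": "https://trusted-hospital.example/logo.png",   # likewise spoofed
    "redirect_uris": ["https://as1/claims_callback"],          # actually attacker-controlled
    "grant_types": ["urn:ietf:params:oauth:grant-type:uma-ticket"],
}

response = requests.post("https://as2.example/register", json=registration)
client_id = response.json()["client_id"]
# Unless AS2 validates this metadata out of band, it may later display the
# spoofed name and URI to the requesting party on its consent page.
```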

Mitigations involving client and authorization server

The following mitigations depend on the behavior of both the client and the authorization server:

Impact for the HEART Profile for UMA

For claims gathering, the HEART profile for UMA 2.0 requires support for pushed-claims as ID tokens and for interactive claims gathering using OpenID Connect login:

The authorization server MUST support claims being presented in at least two methods:

  • by redirecting the requesting party to a web page where they can log in to the authorization server using OpenID Connect
  • directly by the client in the form of an OpenID Connect ID Token.

[...] Since the audience of an ID token is the client's identifier with the IdP, and this client identifier is known only to the client and the IdP, this restriction effectively means that ID tokens can only be presented at the RPT endpoint in the special case when the authorization server is also the IdP or there is another closely bound relationship between the AS and IdP.

In the first case (OpenID Connect login), this attack might be possible.

In the second case (pushed ID token), this attack should not work. The client is expected to push the ID token only when the authorization server is also the IdP. As a consequence, the client should not send ID tokens issued by the legitimate authorization server to the malicious authorization server. Instead, the client application might only send an ID token issued by the malicious authorization server to the malicious authorization server.
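A sketch of the client-side check implied by this behavior: an ID token is pushed as a claim token only when it was issued by the authorization server acting as IdP and its audience is the client's identifier at that server. The function names and claim structures are illustrative.

```python
# Sketch: only push an ID token as a claim token when the UMA authorization
# server is also the IdP that issued it. Names and structures are illustrative.
def audiences(claims: dict) -> list:
    """Normalize the 'aud' claim to a list."""
    aud = claims.get("aud", [])
    return [aud] if isinstance(aud, str) else list(aud)


def may_push_id_token(id_token_claims: dict, as_issuer: str, client_id_at_as: str) -> bool:
    """True only if the ID token was issued by the authorization server (acting
    as IdP) and its audience is the client's identifier at that server."""
    return (
        id_token_claims.get("iss") == as_issuer
        and client_id_at_as in audiences(id_token_claims)
    )


# An ID token issued by the legitimate AS2/IdP (iss == "https://as2") is never
# pushed to the malicious AS1 (issuer "https://as1"), so the attack fails here.
```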

Conclusion

This problem has already been identified in the general context of OAuth. However, in the open, loosely-coupled environment enabled by federated UMA (which allows dynamic relationships between clients, resource servers and authorization servers), this issue might be especially problematic: this type of loosely-coupled deployment scenario lends itself to such phishing attacks.

References

UMA:

HEART:

OAuth:

OAuth security considerations:


  1. At this point, the user may notice that the client application name or URI is wrong. ↩︎

  2. This is not great from an interoperability point of view. Without a profile defining how pushed claims should be used, they are not really interoperable. The only interoperable method is interactive claims-gathering. See the discussion about the usage of ID tokens as pushed claims in the HEART profile for UMA for an example of a profile defining a possible usage of pushed claims. ↩︎