
The Apache Sentry security service - part IV

This is the fourth in a series of blog posts on the Apache Sentry security service. The first post looked at how to get started with the Apache Sentry security service, both from scratch and via a docker image. The second post looked at how to define the authorization privileges held in the Sentry security service. The third post looked at securing Apache Kafka with Apache Sentry, where the privileges were defined in the Sentry security service. In this post, we will update an earlier tutorial I wrote on securing Apache Hive using Apache Sentry, so that it also retrieves the privileges from the Sentry security service.

1) Configure authorization in Apache Hive

Please follow this tutorial to install and configure Apache Hadoop and Apache Hive, except use version 2.3.2 of Apache Hive, which is the version supported by Apache Sentry 2.0.0. After installation, follow the instructions to create a table in Hive and make sure that a query is successful. Now we will integrate Apache Sentry 2.0.0 with Apache Hive. First copy the jars from the "lib" directory of the Sentry distribution to the Hive "lib" directory. We need to add three new configuration files to the "conf" directory of Apache Hive.

Create a file called 'conf/hiveserver2-site.xml' with the content:
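A minimal sketch of such a configuration is shown below - the property values enable authorization and hook the Sentry binding into HiveServer2; the exact class names are my assumption and should be verified against the Sentry 2.0.0 distribution:

```xml
<configuration>
    <!-- Enable authorization in HiveServer2 -->
    <property>
        <name>hive.security.authorization.enabled</name>
        <value>true</value>
    </property>
    <!-- Use the (non-v2) Sentry Hive binding -->
    <property>
        <name>hive.security.authorization.task.factory</name>
        <value>org.apache.sentry.binding.hive.SentryHiveAuthorizationTaskFactoryImpl</value>
    </property>
    <property>
        <name>hive.server2.session.hook</name>
        <value>org.apache.sentry.binding.hive.HiveAuthzBindingSessionHook</value>
    </property>
    <!-- Point Hive at the Sentry plugin configuration -->
    <property>
        <name>hive.sentry.conf.url</name>
        <value>file:./conf/sentry-site.xml</value>
    </property>
</configuration>
```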

Here we are enabling authorization and adding the Sentry authorization plugin. Note that it differs a bit from the hiveserver2-site.xml given in the previous tutorial, namely that we are not using the "v2" Sentry Hive binding as before.

Next create a new file in the "conf" directory of Apache Hive called "sentry-site.xml" with the following content:
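A sketch of what this file could contain is given below - the property names follow the Sentry Hive plugin conventions and the default Sentry service port (8038) is assumed, so check them against your installation:

```xml
<configuration>
    <!-- Retrieve privileges from the Sentry security service -->
    <property>
        <name>sentry.service.client.server.rpc-addresses</name>
        <value>localhost:8038</value>
    </property>
    <property>
        <name>sentry.hive.provider.backend</name>
        <value>org.apache.sentry.provider.db.SimpleDBProviderBackend</value>
    </property>
    <!-- Group mappings are read from sentry.ini -->
    <property>
        <name>sentry.hive.provider.resource</name>
        <value>file:./conf/sentry.ini</value>
    </property>
    <!-- Required as we are not using Kerberos -->
    <property>
        <name>sentry.hive.testing.mode</name>
        <value>true</value>
    </property>
    <property>
        <name>sentry.hive.server</name>
        <value>server1</value>
    </property>
</configuration>
```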


This is the configuration file for the Sentry plugin for Hive. It instructs Sentry to retrieve the authorization privileges from the Sentry security service, and to get the groups of authenticated users from the 'sentry.ini' configuration file. As we are not using Kerberos, the "testing.mode" configuration parameter must be set to "true". Finally, we need to define the groups associated with a given user in 'sentry.ini' in the conf directory:
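Assuming the standard Sentry ini format, where the "[users]" section maps a user to its groups, a minimal sentry.ini for this scenario would be:

```ini
[users]
# map the user "alice" to the group "user"
alice = user
```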

Here we assign "alice" the group "user". Note that in the earlier tutorial this file also contained the authorization privileges, but they are not required in this scenario as we are using the Apache Sentry security service.

2) Configure the Apache Sentry security service

Follow the first tutorial to install the Apache Sentry security service. Now we need to create the authorization privileges for our Apache Hive test scenario as per the second tutorial. Start the "sentryCli" in the Apache Sentry distribution, and assign a role to the "user" group (of which "alice" is a member) with the privilege to perform a "select" statement on the "words" table:
  • cr select_role
  • gp select_role "Server=server1->Db=default->Table=words->Column=*->action=select"
  • gr select_role user
Now we can test authorization after restarting Apache Hive. The user 'alice' should now be able to query the table according to our policy:
  • bin/beeline -u jdbc:hive2://localhost:10000 -n alice
  • select * from words where word == 'Dare'; (works)

Running the Apache Ranger Admin service 1.0.0 in Docker

Apache Ranger 1.0.0 was recently released after a long development cycle, featuring a huge number of improvements and bug fixes. A previous blog post covered how to manually install the Apache Ranger admin service, by compiling the Apache Ranger source and using MySQL as the database. However, this involves a large number of steps, as well as installing MySQL, Apache Maven, Java, etc. In this post we will show how Docker Compose can be used to easily set up the Apache Ranger 1.0.0 Admin Service.

1) Description

The project is available in my github testcases repository here. This project is provided as a quick and easy way to play around with the Apache Ranger admin service. It should not be deployed in production, as it uses default security credentials, it is not secured with Kerberos, auditing is not enabled, etc. It contains the configuration required to build two Docker images:
  • ranger-postgres: Contains a Dockerfile to set up a Postgres database for Apache Ranger, creating the necessary users for the Ranger admin installation scripts to work.
  • ranger-admin: Contains a Dockerfile to build, configure and install the Apache Ranger admin service. It downloads the Apache Ranger source code, builds and extracts the Admin service, configures it to use the Postgres database, and starts the Admin service when the docker image is started.
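The docker-compose.yml tying the two images together might look something like the following - an illustrative sketch (the image names match those built below; consult the github project for the actual file):

```yaml
version: '2'
services:
  # Postgres database used by the Ranger admin service
  postgres-server:
    image: coheigea/ranger-postgres
    ports:
      - "5432:5432"
  # Ranger admin service, configured to talk to postgres-server
  ranger-admin:
    image: coheigea/ranger-admin
    ports:
      - "6080:6080"
    depends_on:
      - postgres-server
```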
2) Building and running

First we need to build the docker images. This can be done via:
  • (In ranger-postgres) docker build . -t coheigea/ranger-postgres
  • (In ranger-admin) docker build . -t coheigea/ranger-admin
Note that the ranger-admin docker image takes a long time to build, as it compiles the source code using Apache Maven and hence needs to download a large number of dependencies.

There are two ways of running the project. The easiest is to install Docker Compose and then simply start it with:
  •  docker-compose up
The alternative is to create a network so that we can link containers, and then run the images separately using docker, i.e.:
  • docker network create my-network
  • docker run -p 5432:5432 --name postgres-server --network my-network coheigea/ranger-postgres
  • docker run -p 6080:6080 -it --network my-network coheigea/ranger-admin
Once the Ranger admin server is started then open a browser and navigate to:
  • http://localhost:6080 (credentials: admin/admin)
To see how to create authorization policies for various big data components using the UI please refer to the numerous blog posts I have previously written on this topic (for example: Kafka, HBase, HDFS).

Streaming WS-Security MTOM support in Apache CXF

Apache CXF 3.0.0 introduced a new streaming (StAX-based) WS-Security implementation, built on new functionality available in the core libraries - Apache WSS4J 2.0.0 and Apache Santuario 2.0.0. The StAX-based approach is more limited than the older DOM-based alternative, and in general slightly slower. However, it comes into its own when sending or processing very large documents, due to the low memory footprint of the library.

In addition, support was added in Apache CXF 3.2.0 for the DOM library to send and process WS-Security messages using MTOM. Essentially, this means we can compress WS-Security secured SOAP messages by storing binary content in message attachments, instead of inlining it in the message (via Base64 encoding). When MTOM is enabled, Apache CXF will automatically use this functionality for WS-Security. However, up until now, this functionality has not been available for the streaming WS-Security library.

This is set to change in Apache CXF 3.2.5. Support has been added in Apache Santuario to process CipherValue message elements in the streaming XML Security code that contain an "xop:Include" reference to a message attachment. Fixes in Apache WSS4J build on this support, also adding support for processing BinarySecurityToken elements that contain an 'xop:Include' instead of the inlined bytes. Both of these sets of changes are pulled into Apache CXF in the following JIRA.
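To illustrate, an encrypted element in such a message carries an "xop:Include" reference pointing at the attachment, rather than the inlined Base64-encoded bytes. A sketch of what this looks like on the wire (the cid value is illustrative):

```xml
<xenc:EncryptedData xmlns:xenc="http://www.w3.org/2001/04/xmlenc#">
    <xenc:CipherData>
        <xenc:CipherValue>
            <!-- binary ciphertext lives in a MIME attachment, not inline -->
            <xop:Include xmlns:xop="http://www.w3.org/2004/08/xop/include"
                         href="cid:attachment-1"/>
        </xenc:CipherValue>
    </xenc:CipherData>
</xenc:EncryptedData>
```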

What this means is that Apache CXF 3.2.5 onwards will be able to process WS-Security enabled SOAP messages over MTOM. Please note however that support is limited to processing messages. The streaming code still inlines message bytes on the outbound side, unlike the DOM implementation. This could perhaps be implemented in the future if there is sufficient demand.

Securing web services using Talend's Open Studio for ESB - part I

This is the first part in a series of posts on securing web services using Talend's Open Studio for ESB. Talend's Open Studio for ESB is a freely available tool that comprises an eclipse-based studio to design and test web services, as well as a runtime based on Apache Karaf which can be used to deploy the web services built in the studio. In this post we will show how a simple SOAP web service can be designed and tested in the Studio.

1) Download and start the Studio and design a simple SOAP web service

Firstly download and extract Talend's Open Studio for ESB. Version 7.0.1 was used for the purpose of this blog post. The "Runtime_ESBSE" directory contains the runtime container, and the "Studio" directory the Open Studio for ESB. Launch the Studio and create a new project.

First we will create a simple SOAP "double-it" web service using the Studio. Right click on "Services" in the left-hand menu and then "Create Service". Call the service "DoubleIt" and click "Next" to create a new WSDL. The Studio then displays the design of the service, and the WSDL can be seen if you click on the "Source" tab. We will make a few changes to the default service.

Click on "DoubleItPortType" and change the operation name from "DoubleItOperation" to "DoubleIt". Next click on the right arrow next to "DoubleItRequest" and change the request type from "string" to "int". Do the same for the Response type. Now save the service and we are ready to move on to the next step.
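After these changes, the relevant part of the schema in the WSDL (visible via the "Source" tab) should look roughly like the following sketch, assuming the default Studio target namespace used later in this tutorial:

```xml
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            targetNamespace="http://www.talend.org/service/">
    <!-- request and response types changed from string to int -->
    <xsd:element name="DoubleItRequest" type="xsd:int"/>
    <xsd:element name="DoubleItResponse" type="xsd:int"/>
</xsd:schema>
```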




2) Implement the "DoubleIt" service we have designed

After having designed the service above, now we need to implement the service by assigning a job to it. Right click on the "DoubleIt" service we have created ("Services/DoubleIt 0.1/DoubleItPortType 0.1/DoubleIt 0.1" in the left-hand menu) and select "Assign Job", and click "Next" to create a new job for this service. Now under "Job Designs" in the left-hand menu we see a new job has been created and the main window has been updated with tESBProviderRequest and tESBProviderResponse components.

Drag the tESBProviderResponse component over to the right hand side of the window. Now we need to think about how to handle the service logic. In our service, we want to take an input number, double it, and assign it to an output number. A Talend component available in the Studio that allows us to map XML is the "tXMLMap" component.

Find the "tXMLMap" component in the palette on the right hand side (under "XML"), and drag it onto the main window in between the two existing components. Now right click on tESBProviderRequest and select "Row" and "Main", and map the arrow onto the tXMLMap component. Do the same from tXMLMap to tESBProviderResponse, giving a name "Response" for the output when prompted.


Next we need to configure the "tXMLMap" component to implement the mapping logic. Double click on "tXMLMap". The input request is available on the left hand side, with a "root" payload. We want to map the request payload to the response payload. Hold the left mouse button down on the left-hand side over the request "root" payload, and move the mouse over to the right hand side and release on the response "root" payload, selecting "Add linker to target node".

Before we try to implement the "doubling" logic, we need to change the payload type from "String" to "int". Click on the "Tree Schema Editor" tab at the bottom of the screen and change both the request and response payload types to "int". Back up on the Response tab, click on "[row1.payload:/root]" and edit it to be "2 * [row1.payload:/root]".

Finally, we need to change the request/response elements to conform to the WSDL. Right-click on "root" in the left-hand request column and rename it to "ns2:DoubleItRequest". When this is done, right click again on this element and "Set a Namespace", with the namespace "http://www.talend.org/service/" and prefix "ns2". Similarly, on the right hand side, rename "root" to "ns2:DoubleItResponse", and set the namespace in the same way as for the request. Now click "Apply" and save the job.

Now click on the "Run" tab at the bottom of the screen and run the service. The service should now be deployed at "http://localhost:8090/services/DoubleIt".

3) Implement a client for the "DoubleIt" service

As well as designing, creating and testing web services, Talend Open Studio for ESB can also be used to create clients for these web services. To do this we need a new job. Right click on "Job Designs" in the left-hand menu and create a new job. Drag the "tESBConsumer" component to the main screen, as well as two "tLogRow" components. Right click on "tESBConsumer" and select "Row/Response" and drag the arrow to the first tLogRow component. In addition, select "Row/Fault" and drag the arrow to the second tLogRow component. This way we are logging both the responses and the faults from the remote service.

Left-click on "tESBConsumer" and specify "http://localhost:8090/services/DoubleIt?wsdl" as the WSDL location (the WSDL of our service is available at this address because the deployed service publishes it). Then click the "reload" button on the right-hand side and click "Finish". Next we need to implement the client logic - namely, to supply a number to double. Drag the "tFixedFlowInput" and "tXMLMap" components to the screen. Map "Row/Main" from "tFixedFlowInput" to "tXMLMap", and "Row" from "tXMLMap" to "tESBConsumer" with a new output name of "Request".

Now click on "tFixedFlowInput" and "Edit Schema". Add a new column of type "int" called "numberToDouble". Back in the component screen for "tFixedFlowInput" select "Use inline table" and enter a number (e.g. 200). Now click on "tXMLMap" to configure our mapping. Drag "numberToDouble" on the left-hand side over to "root" on the right-hand side, selecting "Add linker to target node". Right click on "root" on the right-hand side, and rename it to "ns2:DoubleItRequest", and again "Set a Namespace" with namespace "http://www.talend.org/service/" and prefix "ns2".

Click OK and save the job, and run it via the "Run" tab. In the console window we should see the response from the service, informing us that 200 doubled is "400". The job is also updated so that you can see the flow along with the throughput:

In the next tutorial we'll look at how to deploy our service and client jobs in the runtime container.

SAML SSO support for the Apache CXF Fediz plugins

Apache CXF Fediz originated as a way of securing web applications using Single Sign-On via the WS-Federation Passive Requestor Profile. Plugins were written to support the most popular web application containers, such as Apache Tomcat, Jetty, Spring, Websphere, etc. Fediz then shipped an IdP which could be used to perform authentication using the container plugins. For the 1.3.0 release, support was added to the IdP to also support the SAML SSO protocol. From the next 1.4.4 release, the Tomcat 8 plugin can authenticate to a SAML SSO IdP, instead of using WS-Federation, with a few simple configuration changes. This makes it very easy to upgrade your Fediz-secured containers to use SAML SSO instead of WS-Federation.

In this article, we will secure the 'fedizhelloworld' application example that ships with Fediz, which is deployed in Apache Tomcat, using Keycloak as the SAML SSO IdP. We will show how to deploy and secure the application both manually and also by a docker image that I have created for quick deployment.

1) Download and configure Keycloak

Download and install the latest Keycloak distribution (tested with 3.4.3).

1.1) Create users in Keycloak


Start keycloak in standalone mode by running 'sh bin/standalone.sh'. First we need to create an admin user by navigating to the following URL, and entering a password:
  • http://localhost:8080/auth/

    Click on the "Administration Console" link and log in using the admin user credentials. You will see the configuration details of the "Master" realm. For the purposes of this demo, we will create a new realm: hover the mouse pointer over "Master" in the top left-hand corner, click on "Add realm", and create a new realm called "fediz-samlsso". Now we will create a new user in this realm. Click on "Users" and select "Add User", specifying "alice" as the username. Click "Save", then go to the "Credentials" tab for "alice", specify a password, unselect the "Temporary" checkbox, and reset the password.

    1.2) Create a new client application in Keycloak

    Now we will create a new client application for 'fedizhelloworld' in Keycloak. Select "Clients" in the left-hand menu, and click on "Create". Specify the following values:
    • Client ID: urn:org:apache:cxf:fediz:fedizhelloworld
    • Client protocol: saml
    • Client SAML Endpoint: https://localhost:9443/fedizhelloworld/secure
    Once the client is created you will see more configuration options:
    • Select "Sign Assertions"
    • Select "Force Name ID Format".
    • Valid Redirect URIs: https://localhost:9443/*
    Click 'Save'. Now go to the "SAML Keys" tab of the newly created client. Here we will have to import the certificate of the Fediz RP so that Keycloak can validate the signed SAML requests. Click "Import" and specify:
    • Archive Format: JKS
    • Key Alias: mytomrpkey
    • Store password: tompass
    • Import file: rp-ssl-key.jks
    1.3) Export the Keycloak signing certificate

    Finally, we need to export the Keycloak signing certificate so that the Fediz plugin can validate the signed SAML Response from Keycloak. Select "Realm Settings" (for "fediz-samlsso") and click on the "Keys" tab. Copy and save the value specified in the "Certificate" textfield. 

    2) Manually configure 'fedizhelloworld' application in Apache Tomcat

    In this section, we'll look at manually configuring the 'fedizhelloworld' application in Apache Tomcat. To use a docker image skip to the next section. Download and extract Apache Tomcat 8 (tested with 8.5.32) to ${catalina.home}. Download Fediz 1.4.4 and build the source with "mvn clean install -DskipTests". Copy 'apache-fediz/target/apache-fediz-1.4.4' to a new directory (${fediz.home}).

    2.1) Secure the Apache Tomcat container with the Fediz plugin

    First we will secure the Apache Tomcat container with the Fediz plugin:
    • Create a new directory: ${catalina.home}/lib/fediz
    • Edit ${catalina.home}/conf/catalina.properties and append ',${catalina.home}/lib/fediz/*.jar' to the 'common.loader' property.
    • Copy ${fediz.home}/plugins/tomcat8/lib/* to ${catalina.home}/lib/fediz
    • Edit the TLS Connector in ${catalina.home}/conf/server.xml, and change the ports to avoid conflict with Keycloak, i.e. switch 8080 to 9080, 8443 to 9443, etc.
    • In the same file, add configuration for the TLS port: <Connector port="9443" protocol="org.apache.coyote.http11.Http11NioProtocol" maxThreads="150" SSLEnabled="true" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" keystoreFile="rp-ssl-key.jks" keystorePass="tompass" />
    2.2) Deploy 'fedizhelloworld' to Tomcat
    • Do a "mvn clean install" in ${fediz.home}/examples/simpleWebapp
    • Copy ${fediz.home}/examples/simpleWebapp/target/fedizhelloworld.war to ${catalina.home}/webapps.
    • Copy ${fediz.home}/examples/samplekeys/rp-ssl-key.jks to ${catalina.home}.
    • Copy ${fediz.home}/examples/simpleWebapp/src/main/config/fediz_config.xml to ${catalina.home}/conf/
    2.3) Configure 'fediz_config.xml'
     
    Now we need to configure '${catalina.home}/conf/fediz_config.xml' that we copied from the Fediz example in the previous section:
    • Under 'contextConfig', specify the key we are using to sign the SAML Request:
      <signingKey keyAlias="mytomrpkey" keyPassword="tompass">
          <keyStore file="rp-ssl-key.jks" password="tompass" type="JKS" />
      </signingKey>
    • Change the trustManager keystore to: <keyStore file="keycloak.cert" type="PEM" />
    • In the 'Protocol' section, change 'federationProtocolType' to 'samlProtocolType'.
    • Change the 'Issuer' value to: http://localhost:8080/auth/realms/fediz-samlsso/protocol/saml
    • Under 'Protocol' add: <signRequest>true</signRequest> and <disableDeflateEncoding>true</disableDeflateEncoding>
    Finally, create a file called 'keycloak.cert' in ${catalina.home}. In between "-----BEGIN CERTIFICATE----- / -----END CERTIFICATE-----" tags, paste the Keycloak signing certificate as retrieved in step "1.3" above. 

    3) Using a docker image to deploy 'fedizhelloworld'

    I have created a simple docker project which can be used to package and deploy the 'fedizhelloworld.war' into Apache Tomcat, which is secured with Fediz. The project is available on github here. Clone the project and build and run with the following steps:
    • Edit 'keycloak.cert' and paste in the signing certificate as retrieved in step 1.3 above.
    • Build with: docker build -t coheigea/fediz-samlsso-rp .
    • Run with: docker run -p 9443:8443 coheigea/fediz-samlsso-rp
    At the time of writing, the docker file references a SNAPSHOT build of Fediz, as 1.4.4 is not yet released.

    4) Testing the service

    To test the service navigate to:
    • https://localhost:9443/fedizhelloworld/secure/fedservlet
    You should be redirected to the Keycloak authentication page. Enter the user credentials you have created, and you will be redirected back to the 'fedizhelloworld' application successfully.


    Securing web services using Talend's Open Studio for ESB - part II

    This is the second article in a series on securing web services using Talend's Open Studio for ESB. In the first article, we looked at how Talend's Open Studio for ESB can be used to design and test a SOAP web service, and also how we can create a client job that invokes on this service. In this article, we will show how to deploy the service and client we created previously in the Talend ESB runtime container.

    1) The Talend ESB runtime container

    When we downloaded Talend Open Studio for ESB (see the first article), we launched the Talend Studio via the "Studio" directory to design and test our "double it" SOAP service. However, the ability to "Run" the SOAP Service in the Studio is only suitable for testing the design of the service. Once we are ready to deploy a service or client we have created in the Studio, we will need a suitable runtime container, something that is available in the "Runtime_ESBSE" directory in the Talend Open Studio for ESB distribution. The runtime container in question is a powerful and enterprise-ready container based on Apache Karaf. We can start it in the "Runtime_ESBSE/container" directory via "bin/trun":

    By default, the Talend ESB runtime starts with a set of default "bundles" (which can be viewed with "la"). All of the libraries that we require will be started automatically, so no further work is required here.

    2) Export the service and client job from the Studio

    To deploy the SOAP "double it" service, and client job, we need to export them from the Studio. Right click on the "Double It" service in the left-hand menu, and first select "ESB Runtime Options", ticking "Log Messages" so that we can see the input/output messages of the service when we look at the logs. Then, right click again on "Double It" and select "Export Service" and save the resulting .kar file locally.

    Before exporting the client job, we need to make one minor change. The default port that the Studio used for the "double it" SOAP service (8090) is different to that of Karaf (8040). Click on "tESBConsumer" and change the port number in the address to "8040". Then after saving, right click on the double it client job and select "Build job". Under "Build Type" select "OSGI bundle for ESB", and click "Finish" to export the job:

    3) Deploy the service and client jobs to the Runtime Container

    Finally, we need to deploy the service and client jobs to the Runtime Container. First, copy the service .kar file into "Runtime_ESBSE/container/deploy". This will automatically deploy the service in Karaf (something that can be verified by running "la" in the console - you should see the service as the last bundle on the list). Then also copy the client jar into the "deploy" directory. The response will be output in the console window (due to the tLogRow component), and the full message can be seen in the server logs ("log/tesb.log"):

    Securing web services using Talend's Open Studio for ESB - part III

    This is the third article in a series on securing web services using Talend's Open Studio for ESB. In the first article, we looked at how to design and test a SOAP web service in the Studio, and how to create a client job to invoke on it. In the second article we looked at deploying the jobs in the Talend ESB runtime container. In this article, we will look at how to secure the SOAP webservice we are deploying in the container, by requiring the client to authenticate using a WS-Security UsernameToken.

    1) Secure the "double-it" webservice by requiring clients to authenticate

    First we will secure the "double-it" webservice we have designed in the Studio in the first article, by requiring clients to authenticate using a WS-Security UsernameToken. Essentially what this means is that the client adds a SOAP header to the request containing username and password values, which then must be authenticated by the service. UsernameToken authentication can be configured for a service in the Studio, by right-clicking on the "DoubleIt 0.1" Service in the left-hand menu and selecting "ESB Runtime Options". Under "ESB Service Security" select "Username/Password". Select "OK" and export the service again as detailed in the second article.

    Now start the container and deploy the modified service. Note that selecting "Username/Password" causes the container to enforce the policy stored in 'etc/org.talend.esb.job.token.policy', a WS-SecurityPolicy assertion requiring that a UsernameToken must always be sent to the service. Now deploy the client job - you will see an error in the Console along the lines of:

    {http://schemas.xmlsoap.org/soap/envelope/}Server|These policy alternatives can not be satisfied:
    {http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702}SupportingTokens
    {http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702}UsernameToken

    This is due to the fact that we have not yet configured the client job to send a UsernameToken in the request.

    2) How authentication works in the container

    So far we have required clients to authenticate to the service, but we have not said anything about how the service actually authenticates the credentials that it receives. Apache Karaf uses JAAS realms to handle authentication and authorization. Typing "jaas:realm-list" in the container shows the list of JAAS realms that are installed:

    Here we can see that the (default) JAAS realm of "karaf" has been configured with a number of JAAS Login Modules. In particular, in index 1, the PropertiesLoginModule authenticates users against entries in 'etc/users.properties'. This file contains entries that map a username to a password, as well as an optional number of groups. It also contains entries mapping groups to roles. In this example though we are solely concerned with authentication. The service will extract the username and password from the security header of the request and will compare them to the values in 'etc/users.properties'. If there is a match then a user is deemed to be authenticated and the request can proceed.
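For example, entries in 'etc/users.properties' follow the Karaf format of a username mapped to a password plus an optional list of groups/roles. The roles shown here are illustrative - the actual defaults ship with the container:

```properties
# <username> = <password>[,<group>|<role>...]
tesb = tesb,admin
```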

    In a real-world deployment, we can authenticate users stored in a database or in an LDAP directory server, by configuring a JAAS Realm with the appropriate LoginModules (see the Karaf security guide for a list of available Login Modules).

    3) Update the client job to include a UsernameToken

    Finally we have to update the client job to include a UsernameToken in the Studio. Open the "tESBConsumer" component and select "Use Authentication", and then select the "Username Token" authentication type. Enter "tesb" for the username and password values (this is one of the default users defined in 'etc/users.properties' in the container).



    Now save the job and build and deploy it as per the second tutorial. The job request should succeed, with the response message printed in the console. Examining 'log/tesb.log' it is possible to see what the client request looks like:

    In the next article we'll look at authentication using SAML tokens.

    Securing web services using Talend's Open Studio for ESB - part IV

    This is the fourth article in a series on securing web services using Talend's Open Studio for ESB. In the previous article, we looked at how to secure a SOAP webservice in the Talend container, by requiring the client to authenticate using a WS-Security UsernameToken. In this post we will look at an alternative means of authenticating clients using a SAML token, which the client obtains from a Security Token Service (STS) also deployed in the Talend container. This is more sophisticated than the UsernameToken approach, as we can embed claims as attributes in the SAML Assertion, thus allowing the service provider to also make authorization decisions. However, in this article we will just focus on authentication.

    1) Secure the "double-it" webservice by requiring clients to authenticate

    As in the previous article, first we will secure the "double-it" webservice we have designed in the Studio in the first article, by requiring clients to authenticate using a SAML Token, which is conveyed in the security header of the request. SAML authentication can be configured for a service in the Studio, by right-clicking on the "DoubleIt 0.1" Service in the left-hand menu and selecting "ESB Runtime Options". Under "ESB Service Security" select "SAML Token". Select "OK" and export the service again as detailed in the second article.

    Now start the container and deploy the modified service. Note that selecting "SAML Token" causes the container to enforce the policy stored in 'etc/org.talend.esb.job.saml.policy', a WS-SecurityPolicy assertion requiring that a SAML 2.0 token, containing an X.509 certificate associated with the client (subject), must be sent to the service. In addition, a Timestamp must be included in the security header of the request, and signed by the private key associated with the X.509 certificate in the Assertion.

    2) Update the client job to include a SAML Token in the request

    Next we have to update the client job to include a SAML Token in the Studio. Open the "tESBConsumer" component and select "Use Authentication", and then select the "SAML Token" authentication type. The propagation options are not required for this task - they are used when a SOAP Service is an intermediary service, and wishes to get a new SAML Token "On Behalf Of" a token that it received. Enter "tesb" for the username and password values (this is one of the default users defined in 'etc/users.properties' in the container). Now save the job and build it.



    3) Start the STS in the container and deploy the client job

    Once the client job has been deployed to the container, it will first attempt to get a SAML Token from the STS. Various properties used by the client to communicate with the STS are defined in 'etc/org.talend.esb.job.client.sts.cfg'. The Talend runtime container ships with a fully-fledged STS. Clients can obtain a SAML Token by including a username/password in the request, which the STS in turn authenticates using JAAS (see section 2 of the previous article). Start the STS in the container via:
    • tesb:start-sts
    Now deploy the client job, and it should succeed, with the response message printed in the console. The log 'log/tesb.log' includes the client request and service response messages - in the client request you can see the SAML Assertion included in the security header of the message.

    Running the Apache Kerby KDC in docker

    Apache Kerby is a subproject of the Apache Directory project, and is a complete open-source KDC written entirely in Java. Apache Kerby 1.1.1 has been released recently. Last year I wrote a blog post about how to configure and launch Apache Kerby, by first obtaining the source distribution and building it using Apache Maven. In this post we will cover an alternative approach, which is to download and run a docker image I have prepared which is based on Apache Kerby 1.1.1.

    The project is available on github here and the resulting docker image is available here. Note that this is not an official docker image - it is provided just for testing or experimentation purposes. First clone the github repository and either build the image from scratch or download it from dockerhub:
    • docker build . -t coheigea/kerby
     or:
    • docker pull coheigea/kerby
    The docker image builds a KDC based on Apache Kerby and runs it when started. However, it expects a directory to be supplied as the first argument (defaults to '/kerby-data/conf') containing the configuration files for Kerby. The github repository contains the relevant files in the 'kerby-data' directory. As well as the configuration files, it stores the admin keytab and a JSON file containing the default principals for the KDC.

    Start the KDC by mapping the kerby-data directory to a volume on the container:
    • docker run -it -p 4000:88 -v `pwd`/kerby-data:/kerby-data coheigea/kerby
    Now we can log into the docker image and create a user for our tests:
    • docker exec -it <id> bash
    • stty rows 24 columns 80 (required to run jline in docker)
    • sh bin/kadmin.sh /kerby-data/conf/ -k /kerby-data/keytabs/admin.keytab
    • Then: addprinc -pw password alice@EXAMPLE.COM
    To test the KDC from outside the container you can use the MIT kinit tool. Set the KRB5_CONFIG environment variable to point to the "krb5.conf" file included in the github repository, e.g:
    • export KRB5_CONFIG=`pwd`/krb5.conf
    • kinit alice
    This will get you a ticket for "alice", that can be inspected via "klist".
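    For reference, a minimal 'krb5.conf' for this setup could look like the following. The file shipped in the github repository is the authoritative version; the KDC address of localhost:4000 here matches the docker port mapping used above:

    ```ini
    [libdefaults]
        default_realm = EXAMPLE.COM
        dns_lookup_kdc = false
        dns_lookup_realm = false

    [realms]
        EXAMPLE.COM = {
            kdc = localhost:4000
        }
    ```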

    Combining Keycloak with the Apache CXF STS

    The Apache CXF STS (Security Token Service) is a web service (both SOAP and REST are supported) that issues tokens (e.g. SAML, JWT) to authenticated users. It can also validate, renew and cancel tokens. To invoke successfully on the STS, a user must present credentials to the STS for authentication. The STS must be configured in turn to authenticate the user credentials to some backend. Another common requirement is to retrieve claims relating to the authenticated user from some backend to insert into the issued token.

    In this post we will look at how the STS could be combined with Keycloak to both authenticate users and to retrieve the roles associated with a given user. Typically, Keycloak is used as an IdM for authentication using the SAML SSO or OpenId Connect protocols. However in this post we will leverage the Admin REST API.

    I have created a project on github to deploy the CXF STS and Keycloak via docker here.

    1) Configuring the STS

    Check out the project from github. The STS is a web application contained in the 'src' folder. The WSDL defines a single endpoint with a security policy that requires the user to authenticate via a WS-Security UsernameToken. The STS is configured in Spring: essentially, we define a custom 'validator' to validate the UsernameToken, as well as a custom ClaimsHandler to retrieve role claims from Keycloak. We also configure the STS to issue SAML tokens.

    UsernameTokens are authenticated via the KeycloakUTValidator in the project source. This class is configured with the Keycloak address and realm and authenticates received tokens as follows:

    Here we use the Keycloak REST API to search for the user matching the given username, using the given username and password as credentials. Behind the scenes, the client API obtains an access token from Keycloak using the OAuth 2.0 resource owner password credentials grant, something that can be replicated with a tool like curl as follows:
    • curl --data "client_id=admin-cli&grant_type=password&username=admin&password=password" http://localhost:9080/auth/realms/master/protocol/openid-connect/token -v
    • curl -H "Authorization: bearer <access token>" http://localhost:9080/auth/admin/realms/master/users -H "Accept: application/json" -v
    Keycloak will return a HTTP status code of 401 if authentication fails. We allow the case where Keycloak returns 403 (Forbidden), as the user may not be authorized to invoke on the admin-cli client. A better approach would be to emulate Apache Syncope and have a "users/self" endpoint to allow users to retrieve information about themselves, but I could not find an analogous endpoint in Keycloak.
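    As a rough illustration of the first curl command above, the following Python sketch builds the same resource owner password credentials grant request. The host, realm and credentials are the test values used in this post:

    ```python
    import urllib.parse

    def build_token_request(base_url, realm, username, password, client_id="admin-cli"):
        """Build the URL and form body for an OAuth 2.0 resource owner
        password credentials grant against Keycloak's token endpoint."""
        url = f"{base_url}/auth/realms/{realm}/protocol/openid-connect/token"
        form = {
            "client_id": client_id,
            "grant_type": "password",
            "username": username,
            "password": password,
        }
        return url, urllib.parse.urlencode(form).encode("utf-8")

    # The same request that the first curl command above makes:
    url, body = build_token_request("http://localhost:9080", "master", "admin", "password")
    ```

    POSTing this form body to the URL returns a JSON document containing the "access_token" value used in the second curl command.
    
    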

    Role claims are retrieved via the KeycloakRoleClaimsHandler. This uses the admin credentials to search for the (already authenticated) user, and obtains the effective "realm-level" roles to add to the claim.

    2) Running the testcase in docker

    First build the STS war and create a docker image for the STS as follows:
    • mvn clean install
    • docker build -t coheigea/cxf-sts-keycloak . 
    This latter command just deploys the war that was built into a Tomcat docker image via this Dockerfile. Then pull the official Keycloak docker image and start both via docker-compose (see here):
    • docker pull jboss/keycloak
    • docker-compose up
    This starts the STS on port 8080 and Keycloak on port 9080. Log on to the Keycloak administration console at http://localhost:9080/auth/ using the username "admin" and password "password". Click on "Roles" and add a role for a user (e.g. "employee"). Then click on "Users" and add a new user. After saving, click on "Credentials" and specify a password (unselecting "Temporary"). Then click on "Role Mappings" and select the role you created above for the user.

    Now we will use SoapUI to invoke on the STS. Download it and create a new SOAP project using the WSDL of the STS (http://localhost:8080/cxf-sts-keycloak/UT?wsdl). Click on 'Issue' and select the request. We need to edit the SOAP Body of the request to instruct the STS to issue a SAML Token with a Role Claim using the standard WS-Trust parameters:

    <ns:RequestSecurityToken>
         <t:TokenType xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512">http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</t:TokenType>
         <t:KeyType xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512">http://docs.oasis-open.org/ws-sx/ws-trust/200512/Bearer</t:KeyType>
         <t:RequestType xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512">http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue</t:RequestType>
         <t:Claims xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity" xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512" Dialect="http://schemas.xmlsoap.org/ws/2005/05/identity">
            <ic:ClaimType xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role"/>
         </t:Claims>
    </ns:RequestSecurityToken>

    Click in the Properties box in the lower left-hand corner and specify the username and password for the user you created in Keycloak. Finally, right click on the request and select "Add WSS UsernameToken" and hit "OK" and send the request. If the request was successful you should see the SAML Assertion issued by the STS on the right-hand side. In particular, note that the Assertion contains a number of Attributes corresponding to the roles of that particular user.


    Securing web services using Talend's Open Studio for ESB - part V

    This is the fifth article in a series on securing web services using Talend's Open Studio for ESB. So far we have seen how to design a SOAP service and client in the Studio, how to deploy them to the Talend runtime container, and how to secure them using a UsernameToken and SAML token. In addition to designing 'jobs', the Studio also offers the ability to create a 'route'. Routes leverage the capabilities and components of Apache Camel, which is a popular integration framework. In this article, we will design a route to invoke on the SAML-secured service we configured in the previous tutorial, instead of using a job.

    1) Create a route to invoke on the "double-it" service

    In the Studio, right-click on 'Routes' in the left-hand pane, and select 'Create Route' and create a new route called 'DoubleItClientRoute'. Select the 'cTimer', 'cSetBody', 'cSOAP' and 'cLog' components from the palette on the right-hand side and drag them into the route window from left to right. Link the components up by right clicking on each component, and selecting 'Row' and then 'Route' and left-clicking on the next component over:


    Now let's configure each component in turn. The 'cTimer' component is used to start the route. You can run the route an arbitrary number of times with a specified delay, or else specify a start time to run the route. For now just enter '1' for 'Repeat' as we want to run the route once. Now click on the 'cSetBody' component. This is used to specify the Body of the request we are going to make on the remote (SOAP) service. For simplicity we will just hard-code the SOAP Body, so select 'CONSTANT' as the Language and input '"<ns2:DoubleItRequest xmlns:ns2=\"http://www.talend.org/service/\">60</ns2:DoubleItRequest>"' for the expression:


    Now we will configure the 'cSOAP' component. First, deploy the SAML-secured SOAP service on the container (see previous tutorial) so that we have access to the WSDL. Double-click 'cSOAP' and enter 'http://localhost:8040/services/DoubleIt?wsdl' for the WSDL and hit the reload icon on the right-hand side and click 'Finish'. We will use the default dataformat of 'PAYLOAD' (the SOAP Body contents we set in 'cSetBody'). Select 'Use Authentication' and then pick "SAML Token". Input 'tesb' for the Username and Password values, and save the route.


    2) Deploy the route to the container

    Right click on the route name in the left-hand pane and select 'Build Route' to build the .kar file. In the container where the SAML-secured service should already be running, start the STS with 'tesb:start-sts', and then copy the client route .kar file into the 'deploy' folder. Consult the log in 'log/tesb.log' and you will see the successful service response as follows:


    Securing web services using Talend's Open Studio for ESB - part VI

    This is the sixth article in a series on securing web services using Talend's Open Studio for ESB. Up to now we have seen how to create and secure a SOAP service, client job and route in the Studio, and how to deploy them to the Talend runtime container. For the remaining articles in this series, we will switch our focus to REST web services instead. In this article we will look at how to implement a REST service and client in the Studio.

    1) Implement a "double-it" REST Service in the Studio

    First let's look at how we can implement the "double-it" service as a REST service instead. Open the Studio and right click on "Job Designs" and select "Create job". Create a new job called "DoubleItRESTService". Drag the 'tRESTRequest', 'tXMLMap' and 'tRESTResponse' components from the palette into the central window. Connect them by right-clicking on 'tRESTRequest' and selecting "Row / New Output" and dragging the link to 'tXMLMap', calling the output 'Request'. Right-click on 'tXMLMap' and select "Row / New Output" and drag the link to 'tRESTResponse', calling the output 'Response':


    Now let's design the REST endpoint by clicking on 'tRESTRequest'. Our simple "double-it" service will accept a path parameter corresponding to the number to double. It will return an XML or JSON response containing the doubled number wrapped in a "result" tag. Edit the 'REST endpoint' to add "/doubleit" at the end of the URL. In the REST API mapping, edit the "URI Pattern" to be "/{number}". Now click on the "Output Flow" for "Request" and click on the three dots that appear. Click the "+" button and change the column name to "number" and the Type to "Integer":


    Click "OK" and then double-click on 'tXMLMap'. Left-click on the "number" column on the left-hand side, and drag it over to the right-hand side to the "body" column. Select "Add Linker to Target Node". Now click on "Request.number" on the right-hand side and then on the three dots. Change the expression to "2 * Request.number" to implement the doubling logic. Finally, rename the "root" element to "result":


    Finally click "OK", save the job and run it. We can test via a console that the job is working OK using a tool such as curl:
    • curl -H "Accept: application/xml" http://localhost:8088/doubleit/15
    • Response: <?xml version="1.0" encoding="UTF-8"?><result>30</result>
    • Response if we ask for JSON: {"result":30}
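    The mapping configured above boils down to a few lines of logic. The following Python sketch is purely illustrative - the actual service is generated by the Studio - but it reproduces the responses shown in the curl examples:

    ```python
    def double_it(number: int, accept: str = "application/xml") -> str:
        """Double the path parameter and wrap the result in a 'result' element,
        honouring the requested content type as the service above does."""
        result = 2 * number
        if accept == "application/json":
            return '{"result":%d}' % result
        return '<?xml version="1.0" encoding="UTF-8"?><result>%d</result>' % result
    ```
    
    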
    2) Implement a "double-it" REST client in the Studio

    Now we'll design a client job for the "double-it" REST service in the Studio. Right-click on "Job Designs" and create a new job called "DoubleItRESTClient". Drag a 'tFixedFlowInput', 'tRESTClient' and two 'tLogRow' components from the palette into the central window. Link the components, sending the 'tRESTClient' "Response" to one 'tLogRow' component and the "Error" to the other:

    Now click on 'tFixedFlowInput' and then 'Edit Schema'. Add a new column called "number" of type "Integer", and click "yes" to propagate the changes. In the inline table, add a value for the number. Finally, click on 'tRESTClient' and specify "http://localhost:8088/doubleit/" for the URL, and row1.number for the relative path. Keep the default HTTP Method of "GET" and "XML" for the "Accept Type":


    Now save the job and run it. The service response should be displayed in the window of the run tab. In the next article, we'll look at how to secure this REST service in the Studio when deploying it to the Talend runtime container.

    Securing web services using Talend's Open Studio for ESB - part VII

    This is the seventh and final article in a series on securing web services using Talend's Open Studio for ESB. First we covered how to create and secure a SOAP service, client job and route in the Studio, and how to deploy them to the Talend runtime container. In the previous post we looked instead at how to implement a REST service and client in the Studio. In this post we will build on the previous post by showing some different ways to secure our REST service when it is deployed in the Talend container.

    1) Secure the REST "double-it" webservice using HTTP B/A

    Previously we saw how to secure the SOAP "double-it" service in the container using WS-Security UsernameTokens. In this section we'll also secure our REST service using a username and password that the client supplies - this time using HTTP Basic Authentication. Open the REST service we have created in the Studio, and click on the 'tRESTRequest' component. Select "Use Authentication" and then pick the default "Basic HTTP" option. Save the job and build it by right clicking on the job name and selecting "Build job".


    Start the runtime container and deploy the job. Now open our REST client job in the Studio. Click on the 'tRESTClient' component and select "Use Authentication" as per 'tRESTRequest' above. Select 'tesb' for the username and password (see section 2 of the SAML tutorial for an explanation of how authentication works in the container). Now build the job and deploy it to the container. The client job should successfully run. See below for a log of a successful request where the client credentials can be seen in the "Basic" HTTP header:
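    The "Basic" header seen in the request log is just the base64-encoded 'username:password' pair. It can be reproduced as follows, using the 'tesb' test credentials from the container:

    ```python
    import base64

    def basic_auth_header(username: str, password: str) -> str:
        """Build the HTTP Basic Authentication header value (RFC 7617)."""
        token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
        return f"Basic {token}"

    header = basic_auth_header("tesb", "tesb")
    ```

    Note that Basic Authentication offers no confidentiality on its own, which is why it should only be used over TLS.
    
    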


    2) Secure the REST "double-it" webservice using SAML

    As for SOAP services, we can also secure our REST webservice using SAML. Instead of having the REST client create a SAML Assertion itself, we will leverage the Talend Security Token Service (STS). The REST client uses the same mechanism (WS-Trust) to authenticate and obtain a SAML Token from the Talend STS as in the SOAP case. The REST client then inserts the SAML Token into the Authorization header of the service request. The service parses the header and validates the signature on the SAML Token in exactly the same way as in the SOAP case.

    In the Studio, edit the 'tRESTRequest' and 'tRESTClient' components in our jobs as for the "Basic Authentication" example above, except this time select "SAML Token" for "Use Authentication". Save the jobs and build them and deploy the service to the container. Before deploying the client job, we need to start the STS via:
    • tesb:start-sts
    Then deploy the client job and it should work correctly:



    Two new security advisories for Apache CXF

    Two new security advisories have been published recently for Apache CXF:
    • CVE-2018-8039: Apache CXF TLS hostname verification does not work correctly with com.sun.net.ssl.*:
    It is possible to configure CXF to use the com.sun.net.ssl implementation via: System.setProperty("java.protocol.handler.pkgs", "com.sun.net.ssl.internal.www.protocol");

    When this system property is set, CXF uses some reflection to try to make the HostnameVerifier work with the old com.sun.net.ssl.HostnameVerifier interface. However, the default HostnameVerifier implementation in CXF does not implement the method in this interface, and so an exception is thrown - but the exception is caught in the reflection code and not properly propagated.

    What this means is that if you are using the com.sun.net.ssl stack with CXF, an error with TLS hostname verification will not be thrown, leaving a CXF client subject to man-in-the-middle attacks.
    • CVE-2018-8038: Apache CXF Fediz is vulnerable to DTD based XML attacks:
    The fix for advisory CVE-2015-5175 in Apache CXF Fediz 1.1.3 and 1.2.1 prevented DoS style attacks via DTDs. However, it did not fully disable DTDs, meaning that the Fediz plugins could potentially be subject to a DTD-based XML attack.

    In addition, the Apache CXF Fediz IdP is also potentially subject to DTD-based XML attacks for some of the WS-Federation request parameters.
    Please upgrade to the latest releases to pick up fixes for these advisories. The full CVEs are available on the CXF security advisories page.

    Experimenting with Apache CXF Fediz in docker

    I have covered the capabilities of Apache CXF Fediz many times on this blog, giving instructions on how to deploy the IdP or a sample secured web application to a container such as Apache Tomcat. However, such instructions can be quite complex, ranging from building Fediz from scratch and deploying the resulting web applications, to configuring jars and keys in Tomcat, etc. Wouldn't it be great to just be able to build a few docker images and launch them instead? In this post we will show how to easily deploy the Fediz IdP and STS to docker, as well as how to deploy a sample application secured using WS-Federation. Then we will show how easy it is to switch the IdP and the application to use SAML SSO instead.

    1) The Apache CXF Fediz Identity Provider

    The Apache CXF Fediz Identity Provider (IdP) actually consists of two web applications - the IdP itself which can handle both WS-Federation and SAML SSO login requests, as well as an Apache CXF-based Security Token Service (STS) to authenticate the end users. In addition, we also have a third web application, which is the Apache CXF Fediz OpenId Connect IdP, but we will cover that in a future post. It is possible to build docker images for each of these components with the following project on github:
    • fediz-idp: A sample project to deploy the Fediz IdP
    To launch the IdP in docker, build each of the individual components and then launch using docker-compose, e.g.:
    • cd sts; docker build -t coheigea/fediz-sts .
    • cd idp; docker build -t coheigea/fediz-idp .
    • cd oidc; docker build -t coheigea/fediz-oidc .
    • docker-compose up
    Please note that this project is provided as a quick and easy way to play around with the Apache CXF Fediz IdP. It should not be deployed in production as it uses default security credentials, etc.

    2) The Apache CXF Fediz 'fedizhelloworld' application

    Now that the IdP is configured, we will configure a sample application which is secured using the Fediz plugin (for Apache Tomcat). The project is also available on github here:
    • fediz-helloworld: Dockerfile to deploy a WS-Federation secured 'fedizhelloworld' application
    The docker image can be built and run via:
    • docker build -t coheigea/fediz-helloworld .
    • docker run -p 8443:8443 coheigea/fediz-helloworld
    Now just open a browser and navigate to 'https://localhost:8443/fedizhelloworld/secure/fedservlet'. You will be redirected to the IdP for authentication. Select the default home realm and use the credentials "alice" (password: "ecila") to log in. You should be successfully authenticated and redirected back to the web application.

    3) Switching to use SAML SSO instead of WS-Federation

    Let's also show how we can switch the security protocol to use SAML SSO instead of WS-Federation. Edit the Dockerfile for the fediz-idp project and uncomment the final two lines (to copy entities-realma.xml and mytomrpkey.cert into the docker image). 'mytomrpkey.cert' is used to validate the Signature of the SAML AuthnRequest, something that is not needed in the WS-Federation case as the client request is not signed. Rebuild the IdP image (docker build -t coheigea/fediz-idp .) and re-launch it via "docker-compose up".

    To switch the 'fedizhelloworld' application we need to make some changes to the 'fediz_config.xml'. These changes are already made in the file 'fediz_config_saml.xml':

    Copy 'fediz_config_saml.xml' to 'fediz_config.xml' and rebuild the docker image:
    • docker build -t coheigea/fediz-helloworld .
    • docker run -p 8443:8443 coheigea/fediz-helloworld
    Open a browser and navigate to 'https://localhost:8443/fedizhelloworld/secure/fedservlet' again. Authentication should succeed as before, but this time using SAML SSO as the authentication protocol instead of WS-Federation.

    SAML SSO Logout support in Apache CXF Fediz

    SAML SSO support was added to the Apache CXF Fediz IdP in version 1.3.0. In addition, SAML SSO support was added to the Tomcat 8 plugin from the 1.4.4 release. However, unlike for the WS-Federation protocol, support was not included for SAML SSO logout. That's going to change from the next 1.4.5 release. In this post we will cover how logout works in general for both protocols, across both the IdP and Relying Party (RP) plugins.

    1) Logging out from the Apache CXF Fediz IdP

    a) WS-Federation

    Follow the previous post I wrote about experimenting with Apache CXF Fediz in docker and start the Fediz IdP and the 'fedizhelloworld' application (supporting WS-Federation and not SAML SSO) in docker. Login to the 'fedizhelloworld' application (and to the IdP) by navigating to 'https://localhost:8443/fedizhelloworld/secure/fedservlet' in a browser and logging on with credentials of 'alice'/'ecila'.

    We can log out directly to the IdP by navigating to 'https://localhost:10001/fediz-idp/federation?wa=wsignout1.0'. As our IdpEntity configuration in 'entities-realma.xml' has the property "rpSingleSignOutConfirmation" set to "true", a sign out confirmation page is displayed asking us if we want to log out from the 'fedizhelloworld' application.

    If we click on the "Logout" button then what happens next depends on whether we supplied a "wreply" parameter or not. If no parameter is supplied then a successful logout page is shown at the IdP. Otherwise we have the option of supplying a "wreply" parameter to return to the RP application after logout is successful. For this to work, the IdPEntity configuration bean must have the property "automaticRedirectToRpAfterLogout" set to "true". In addition, the "wreply" address must match a regular expression supplied by the "logoutEndpointConstraint" property of the matching "ApplicationEntity" bean for 'fedizhelloworld'.

    b) SAML SSO

    Support for SAML SSO logout was added to Apache CXF Fediz in the forthcoming 1.4.5 release. The client sends a LogoutRequest to the IdP as follows:
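    A simplified SAML 2.0 LogoutRequest has the following general structure. This is a sketch: a real request is signed, and the ID, IssueInstant, Destination, Issuer and NameID values shown here are placeholders specific to this illustration:

    ```xml
    <samlp:LogoutRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
                         xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                         ID="_a1b2c3d4" Version="2.0"
                         IssueInstant="2018-08-01T12:00:00Z"
                         Destination="https://localhost:10001/fediz-idp/saml">
      <saml:Issuer>urn:org:apache:cxf:fediz:fedizhelloworld</saml:Issuer>
      <saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified">alice</saml:NameID>
    </samlp:LogoutRequest>
    ```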
    After checking the Signature and doing some validation on the request (e.g. checking the destination), a sign out confirmation page is displayed as in the WS-Federation case above (if the property "rpSingleSignOutConfirmation" is set to "true"). Once the user clicks on "Logout", either a logout page is displayed on the IdP, or else a LogoutResponse is returned to the client (if the property "automaticRedirectToRpAfterLogout" is set to "true"). In addition, the URL to redirect back to must be specified in the 'ApplicationEntity' configuration in "entities-realma.xml" under the property "logoutEndpoint".



    2) Logging out from the RP application

    a) WS-Federation

    Next we'll turn our attention to logging out from the 'fedizhelloworld' application, secured by WS-Federation. Log in again to the application by navigating to 'https://localhost:8443/fedizhelloworld/secure/fedservlet'. There are a number of different ways of logging out from the application:
    • Specify a "wa=wsignout1.0" query parameter. This logs the user out and redirects to the IdP to log the user out there.
    • Specify a "wa=wsignoutcleanup1.0" query parameter. This logs the user out and either redirects to a URL supplied by the "wreply" parameter (which must match the configuration item "logoutRedirectTo" or "logoutRedirectToConstraint"), or alternatively to the "logoutRedirectTo" configuration item if no "wreply" parameter is specified. 
    • Navigate to a URL matching the configuration item "logoutURL". The default behaviour here is to log the user out and redirect to the IdP to log the user out there as well.
    Feel free to experiment with these options with 'fedizhelloworld'.

    b) SAML SSO

    Support for SAML SSO logout was added to the Tomcat plugin for the forthcoming 1.4.5 release. If the user navigates to the logout URL configured in fediz_config.xml ("logoutURL"), the user is logged out and a 'LogoutRequest' is sent to the IdP. If a 'LogoutResponse' is received from the IdP, it is processed and the user is then redirected to the page specified in the "logoutRedirectTo" configuration item.

    Follow the steps in the previous post to change the Fediz IdP and 'fedizhelloworld' docker images to use SAML SSO. When changing the IdP configuration, edit 'entities-realma.xml' and change the value for 'automaticRedirectToRpAfterLogout' to 'true'. Also add the following property to the ApplicationEntity bean for "srv-fedizhelloworld":
    • <property name="logoutEndpoint" value="https://localhost:8443/fedizhelloworld/index.html"/>
    Now log on to the RP via 'https://localhost:8443/fedizhelloworld/secure/fedservlet' and log out via 'https://localhost:8443/fedizhelloworld/secure/logout'. You will be logged out of both the RP and the IdP and redirected to a landing page on the RP side.

    OpenId Connect support for the Apache Syncope admin console

    Apache Syncope is a powerful open source Identity Management project at the Apache Software Foundation. Last year I wrote a blog entry about how to log in to the Syncope admin and end-user web consoles using SAML SSO, showing how it works using Apache CXF Fediz as the SAML SSO IdP. In addition to SAML SSO, Apache Syncope supports logging in using OpenId Connect from version 2.0.9. In this post we will show how to configure this using the docker image for Apache CXF Fediz that we covered recently.

    1) Configuring the Apache CXF Fediz OIDC IdP

    First we will show how to set up the Apache CXF Fediz OpenId Connect IdP. Follow section (1) of this post about starting the Apache CXF Fediz IdP in docker. Once the IdP has started via "docker-compose up", open a browser and navigate to "https://localhost:10002/fediz-oidc/console/clients". This is the client registration page of the Fediz OIDC IdP. Authenticate using credentials "alice" (password "ecila") and register a new client for Apache Syncope using the redirect URI "http://localhost:9080/syncope-console/oidcclient/code-consumer". Click on the registered client and save the client Id and Secret for later:

    2) Configuring Apache Syncope to support OpenId Connect

    In this section, we will cover setting up Apache Syncope to support OpenId Connect. Download and extract the most recent standalone distribution release of Apache Syncope (2.1.1 was used in this post). Before starting Apache Syncope, we need to configure a truststore corresponding to the certificate used by the Apache CXF Fediz OIDC IdP. This can be done on linux via for example:
    • export CATALINA_OPTS="-Djavax.net.ssl.trustStore=./idp-ssl-trust.jks -Djavax.net.ssl.trustStorePassword=ispass"
    where "idp-ssl-trust.jks" is available with the docker configuration for Fediz here. Start the embedded Apache Tomcat instance and then open a web browser and navigate to "http://localhost:9080/syncope-console", logging in as "admin" and "password".

    Apache Syncope is configured with some sample data to show how it can be used. Click on "Users" and add a new user called "alice" by clicking on the subsequent "+" button. Specify a password for "alice" and then select the default values wherever possible (you will need to specify some required attributes, such as "surname"). Now in the left-hand column, click on "Extensions" and then "OIDC Client". Add a new OIDC Client, specifying the client ID + Secret that you saved earlier and click "Next". Then specify the following values (obtained from "https://localhost:10002/fediz-oidc/.well-known/openid-configuration"):
    • Issuer: https://localhost:10002
    • Authorization Endpoint: https://localhost:10002/fediz-oidc/idp/authorize
    • Token Endpoint: https://localhost:10002/fediz-oidc/oauth2/token
    • JWKS URI: https://localhost:10002/fediz-oidc/jwk/keys
    Click "Next". Now we need to add a mapping between the user we authenticated at the IdP and the internal user in Syncope ("alice"). Add a mapping from the internal attribute "username" to the external attribute "preferred_username" as follows:

    Now log out and select the "Open Id Connect" dialogue that should have appeared. You will be redirected to the Apache CXF Fediz OIDC IdP for authentication and then redirected back to Apache Syncope, where you will be automatically logged in as the user "alice".

    Exploring Apache Knox - part I

    Apache Knox is an application gateway that works with the REST APIs and User Interfaces of a large number of the most popular big data projects. It can be convenient to enforce that REST or browser clients interact with Apache Knox rather than different components of an Apache Hadoop cluster for example. In particular, Apache Knox supports a wide range of different mechanisms for securing access to the backend cluster. In this series of posts, we will look at different ways of securing access to an Apache Hadoop filesystem via Apache Knox. In this first post we will look at accessing a file stored in HDFS via Apache Knox, where the Apache Knox gateway authenticates the user via Basic Authentication.

    1) Set up Apache Hadoop

    To start we assume that an Apache Hadoop cluster is already running, with a file stored in "/data/LICENSE.txt" that we want to access. To see how to set up Apache Hadoop in such a way, please refer to part 1 of this earlier post. Ensure that you can download the LICENSE.txt file in a browser directly from Apache Hadoop via:
    • http://localhost:9870/webhdfs/v1/data/LICENSE.txt?op=OPEN
    Note that the default port for Apache Hadoop 2.x is "50070" instead.

    2) Set up Apache Knox

    Next we will see how to access the file above via Apache Knox. Download and extract Apache Knox (Gateway Server binary archive - version 1.1.0 was used in this tutorial). First we create a master secret via:
    • bin/knoxcli.sh create-master
    Next we start a demo LDAP server that ships with Apache Knox for convenience:
    • bin/ldap.sh start
    We can authenticate using the credentials "guest" and "guest-password" that are stored in the LDAP backend.

    Apache Knox stores the "topologies" configuration in the directory "conf/topologies". We will re-use the default "sandbox.xml" configuration for the purposes of this post. This configuration maps to the URI "gateway/sandbox". It contains the authentication configuration for the topology (HTTP Basic Authentication), mapping the received credentials to the LDAP backend we started above. It then defines the backend services that are supported by this topology. We are interested in the "WEBHDFS" service, which maps to "http://localhost:50070/webhdfs". Change this port to "9870" if using Apache Hadoop 3.0.0 as in the first section of this post. Then start the gateway via:
    • bin/gateway.sh start
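For reference, the WEBHDFS service entry in "sandbox.xml" looks like the following sketch (the exact default may differ slightly between Knox versions):

```xml
<service>
    <role>WEBHDFS</role>
    <!-- use port 9870 for Apache Hadoop 3.x, 50070 for 2.x -->
    <url>http://localhost:9870/webhdfs</url>
</service>
```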
    Now we can access our file directly via Knox, using credentials of "guest" / "guest-password" via:
    • https://localhost:8443/gateway/sandbox/webhdfs/v1/data/LICENSE.txt?op=OPEN
    Or alternatively using Curl:
    • curl -u guest:guest-password -kL https://localhost:8443/gateway/sandbox/webhdfs/v1/data/LICENSE.txt?op=OPEN
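For illustration, the "Authorization" header that "curl -u" sends for these credentials can be computed as follows; a minimal Python sketch:

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    # Build the HTTP Basic Authentication header value that "curl -u" sends
    creds = (user + ":" + password).encode("utf-8")
    return "Basic " + base64.b64encode(creds).decode("ascii")

print(basic_auth_header("guest", "guest-password"))  # → Basic Z3Vlc3Q6Z3Vlc3QtcGFzc3dvcmQ=
```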

    Exploring Apache Knox - part II

    This is the second in a series of blog posts exploring some of the security features of Apache Knox. The first post looked at accessing a file stored in HDFS via Apache Knox, where the Apache Knox gateway authenticated the user via Basic Authentication. In this post we will look at authenticating to the REST API of Apache Knox using a token rather than using Basic Authentication. Apache Knox ships with a token service which allows an authenticated user to obtain a token, which can then be used to invoke on the REST API.

    1) Set up the Apache Knox token service

    To start with, follow the first tutorial to set up Apache Knox as well as the backend Apache Hadoop cluster we are trying to obtain a file from. Now we will create a new topology configuration file in Apache Knox to launch the token service. Copy "conf/topologies/sandbox.xml" to a new file called "conf/topologies/token.xml". Leave the 'gateway/provider' section as it is, as we want the user to authenticate to the token service using basic authentication as for the REST API in the previous post. Remove all of the 'service' definitions and add a service definition for the Knox token service, e.g.:
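    The Knox token service uses the "KNOXTOKEN" role; a minimal sketch of such a service definition (the TTL value, in milliseconds, is an assumption for illustration):

```xml
<service>
    <role>KNOXTOKEN</role>
    <param>
        <!-- token lifetime in milliseconds (here: ten hours) -->
        <name>knox.token.ttl</name>
        <value>36000000</value>
    </param>
</service>
```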
    Restart Apache Knox. We can then obtain a token via the token service as follows using curl:
    • curl -u guest:guest-password -k https://localhost:8443/gateway/token/knoxtoken/api/v1/token
    This returns a JSON structure containing an access token (in JWT format), as well as a "token_type" attribute of "Bearer" and an expiry timestamp. The access token itself can be introspected (via e.g. https://jwt.io/). In the example above, it contains a header indicating that it is a signed token ("RS256", i.e. RSA with SHA-256), as well as payload attributes identifying the subject ("guest") and issuer ("KNOXSSO"), and an expiry timestamp.
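The payload of a JWT can also be inspected locally without verifying the signature; a minimal Python sketch (the sample token below is a toy, unsigned token built for illustration — a real one comes from the Knox token service):

```python
import base64
import json

def _b64url(obj: dict) -> str:
    # base64url-encode a JSON object, stripping the padding as JWTs do
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()

def decode_jwt_payload(token: str) -> dict:
    # Decode the payload of a JWT WITHOUT verifying the signature (inspection only)
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore the stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Toy token for illustration only
sample = ".".join([_b64url({"alg": "RS256"}),
                   _b64url({"sub": "guest", "iss": "KNOXSSO"}),
                   "sig"])
print(decode_jwt_payload(sample))  # → {'sub': 'guest', 'iss': 'KNOXSSO'}
```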

    2) Invoking on the REST API using a token

    The next step is to invoke on the REST API using a token, instead of using basic authentication as in the example given in the previous tutorial. Copy "conf/topologies/sandbox.xml" to "conf/topologies/sandbox-token.xml". Remove the Shiro provider and instead add the following provider:
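    A sketch of the federation provider definition that enables token authentication for the topology:

```xml
<provider>
    <role>federation</role>
    <name>JWTProvider</name>
    <enabled>true</enabled>
</provider>
```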
    Now restart the Apache Knox gateway again. First obtain a token using curl:
    • curl -u guest:guest-password -k https://localhost:8443/gateway/token/knoxtoken/api/v1/token
    Copy the access token that is returned. Then you can invoke on the REST API using the token as follows:
    • curl -kL -H "Authorization: Bearer <access token>" https://localhost:8443/gateway/sandbox-token/webhdfs/v1/data/LICENSE.txt?op=OPEN
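Extracting the access token from the token service's JSON response and building the Authorization header for the second call can be sketched in Python as follows (the sample response shape is assumed from the output described above):

```python
import json

def bearer_header(token_response: str) -> dict:
    # Extract the access token from the token service's JSON response and
    # build the Authorization header to send to the REST API
    token = json.loads(token_response)["access_token"]
    return {"Authorization": "Bearer " + token}

# Hypothetical response, for illustration only
sample_response = ('{"access_token": "eyJhbGciOiJSUzI1NiJ9.e30.sig", '
                   '"token_type": "Bearer", "expires_in": 36000000}')
print(bearer_header(sample_response))
```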

    Exploring Apache Knox - part III

    This is the third in a series of blog posts exploring some of the security features of Apache Knox. The previous post looked at accessing a file stored in HDFS via Apache Knox, where the Apache Knox gateway authenticated the user using a (JWT) token obtained from the Knox token service. However, the token enforcement in the Knox REST API is not tightly coupled to the Knox token service; a third-party JWT provider can be used instead. In this post, we will show how to authenticate a user to Apache Knox using a token obtained from the Apache CXF Security Token Service (STS).

    1) Deploy the Apache CXF STS in docker

    Apache CXF ships with a powerful and flexible STS that can issue, renew, validate, and cancel tokens of different types via the (SOAP) WS-Trust interface. In addition, it also has a flexible REST interface. I created a sample github project which builds the CXF STS with the REST interface enabled:
    • sts-rest: Project to deploy a CXF REST STS web application in docker
    The STS is configured to authenticate users via HTTP Basic authentication, and it can issue both JWT and SAML tokens. Clone the project, and then build and deploy the project in docker using Apache Tomcat as follows:
    • mvn clean install
    • docker build -t coheigea/cxf-sts-rest .
    • docker run -p 8080:8080 coheigea/cxf-sts-rest
    To check that it is working correctly, open a browser and obtain a SAML and JWT token respectively via the following GET requests (authenticating with username "alice" and password "security"):
    • http://localhost:8080/cxf-sts-rest/SecurityTokenService/token/saml
    • http://localhost:8080/cxf-sts-rest/SecurityTokenService/token/jwt

    2) Invoking on the REST API of Apache Knox using a token issued by the STS

    Now we'll look at how to modify the previous tutorial so that the REST API is secured by a token issued by the Apache CXF STS, instead of the Knox token service. To start with, follow the first tutorial to set up Apache Knox as well as the backend Apache Hadoop cluster we are trying to obtain a file from. Then follow part (2) of the previous tutorial to set up the "sandbox-token" topology. Now copy "conf/topologies/sandbox-token.xml" to "conf/topologies/sandbox-token-cxf.xml". We need to make a few changes to the "JWTProvider" to support validating tokens issued by the CXF STS.

    Edit "conf/topologies/sandbox-token-cxf.xml" and add the following parameters to the "JWTProvider", i.e.:
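A sketch of the provider definition with these parameters added (the certificate content is elided, and the issuer value shown is an assumption — use the "iss" claim of the tokens your STS actually issues):

```xml
<provider>
    <role>federation</role>
    <name>JWTProvider</name>
    <enabled>true</enabled>
    <param>
        <!-- base64 certificate content, without the BEGIN/END CERTIFICATE lines -->
        <name>knox.token.verification.pem</name>
        <value>MIIC...</value>
    </param>
    <param>
        <!-- must match the "iss" claim of the received token -->
        <name>jwt.expected.issuer</name>
        <value>STS</value>
    </param>
</provider>
```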
    "knox.token.verification.pem" is the PEM encoding of the certificate to be used to verify the signature on the received token. You can obtain this in the sts-rest project in github here; simply paste the content between the "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----" lines into the parameter value. "jwt.expected.issuer" is a constraint on the "iss" claim of the token.
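Stripping the BEGIN/END markers and line breaks from a PEM file can also be done programmatically; a minimal Python sketch (the certificate content below is a fake placeholder):

```python
def pem_body(pem: str) -> str:
    # Collapse the base64 lines between the BEGIN/END CERTIFICATE markers
    # onto a single line, as the Knox parameter value expects
    lines = [line.strip() for line in pem.strip().splitlines()]
    return "".join(line for line in lines if line and not line.startswith("-----"))

# Fake certificate content, for illustration only
sample_pem = """-----BEGIN CERTIFICATE-----
MIICFAKEBASE64CONTENT
FORILLUSTRATIONONLY
-----END CERTIFICATE-----"""
print(pem_body(sample_pem))  # → MIICFAKEBASE64CONTENTFORILLUSTRATIONONLY
```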

    Now save the topology file and we can get a token from CXF STS using curl as follows:
    • curl -u alice:security -H "Accept: text/plain" http://localhost:8080/cxf-sts-rest/SecurityTokenService/token/jwt
    Save the (raw) token that is returned. Then invoke on the REST API using the token as follows:
    • curl -kL -H "Authorization: Bearer <access token>" https://localhost:8443/gateway/sandbox-token-cxf/webhdfs/v1/data/LICENSE.txt?op=OPEN