
Using the credential delegation service instead of generated proxy for Xrootd TPC source to dCache

Jürgen Starek edited this page Jan 22, 2019 · 12 revisions

Obsolete documentation

This wiki contains various bits of information that have meanwhile been integrated into our main body of documentation, The dCache Book. These texts will be removed from here during early 2019 in order to avoid fragmentation of the documentation.


DESIGN CONSIDERATIONS

We begin by noting the following:

  1. Using the PEMCredential currently generated from the host certificate has the disadvantage that the DNs of all pools must be distributed to the grid-mapfiles used by all the SLAC servers from which we would transfer files into dCache.
  2. SLAC is working on delegation from the xrdcp client, which would presumably pass the credential in some form in the request; extraction will therefore probably be done on the door, because after the redirect to the pool, the xrdcp client will no longer be using GSI.
  3. This means that in order to ease the transition from an intermediate solution (using the delegation service) to using the proxy given to us by xrdcp, it makes most sense to implement the extraction in the door and to pass the credential in serialized form via a message (or as part of a serialized object contained in the message) to the pool's mover. A good candidate for this is the ProtocolInfo object, which is constructed by the Transfer and given to the pool mover via a message.
  4. Since the delegation service interface belongs to dCache proper, we do not want a dependency on it in the xrootd4j library; this constraint means that we cannot encapsulate the call inside the GSIAuthenticationHandler. Instead, we need some way of identifying the protocol that was used to log the client in at the layer above, inside the dCache xrootd door.
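Point 4 above can be made concrete with a small sketch: the door records which protocol authenticated the client, so that the delegation-service call can be made above the xrootd4j layer. The names here (LoginProtocol, supportsX509Delegation) are illustrative, not dCache API.

```java
// Hypothetical sketch: the door keeps track of the login protocol so that
// only GSI logins trigger a proxy lookup. These types are illustrative and
// do not exist in dCache or xrootd4j under these names.
enum LoginProtocol { GSI, UNIX, NONE }

final class LoginContext {
    private final LoginProtocol protocol;

    LoginContext(LoginProtocol protocol) {
        this.protocol = protocol;
    }

    /** Only GSI logins carry an X.509 identity worth delegating. */
    boolean supportsX509Delegation() {
        return protocol == LoginProtocol.GSI;
    }
}
```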

HOW TO IMPLEMENT

Webdav currently makes use of the delegation service by injecting the client into the 3rd-party-copy-filter:

<bean id="credential-service-stub" class="org.dcache.cells.CellStub">
      <description>Credential service communication stub</description>
      <property name="destination" value="${webdav.credential-service.topic}"/>
      <property name="timeout" value="${webdav.credential-service.timeout}"/>
      <property name="timeoutUnit" value="${webdav.credential-service.timeout.unit}"/>
</bean>
...
<bean id="credential-service-client" class="org.dcache.webdav.transfer.CredentialServiceClient">
      <description>Client for credential service</description>
      <property name="topicStub" ref="credential-service-stub"/>
</bean>
...
<bean id="3rd-party-copy-filter" class="org.dcache.webdav.transfer.CopyFilter">
      <description>Handles requests for 3rd-party copies</description>
      <property name="credentialServiceClient" ref="credential-service-client"/>
      ...
</bean>

Similarly, we can make this client available to the Xrootd server (as a field of the XrootdDoor):

<bean id="credential-service-stub" class="org.dcache.cells.CellStub">
      <description>Credential service communication stub</description>
      <property name="destination" value="${xrootd.credential-service.topic}"/>
      <property name="timeout" value="${xrootd.credential-service.timeout}"/>
      <property name="timeoutUnit" value="${xrootd.credential-service.timeout.unit}"/>
</bean>
<bean id="credential-service-client" class="org.dcache.webdav.transfer.CredentialServiceClient">
      <description>Client for credential service</description>
      <property name="topicStub" ref="credential-service-stub"/>  
</bean>

<bean id="door" class="org.dcache.xrootd.door.XrootdDoor">
    <description>Gateway between xrootd protocol handler and dCache</description>
    ...
    <property name="credentialServiceClient" ref="credential-service-client"/>
</bean>

Placing the client in the dCache XrootdDoor makes it directly available to the door when a transfer object is created.

The strategy that seems to make the most sense is a greedy, fail-later one: for every write transfer, the Subject is used to try to pull in a proxy, which is then passed to the mover. If there is no corresponding proxy, it is left unset; if the operation turns out to be third-party, it will eventually fail downstream.
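The greedy, fail-later policy can be sketched as follows. The `fetchProxy` function stands in for the credential-service call and is hypothetical; the point is only that failures and missing proxies are tolerated at this stage.

```java
// Sketch of the "greedy + fail later" policy: always try to fetch a
// delegated proxy for a write, but tolerate its absence. A plain write
// never needs it; a TPC write without one fails downstream instead.
final class ProxyFetch {
    static String fetchProxyOrNull(String dn,
            java.util.function.Function<String, String> fetchProxy) {
        try {
            return fetchProxy.apply(dn); // may be null if nothing was delegated
        } catch (RuntimeException e) {
            return null; // greedy: swallow the failure now, fail later if TPC
        }
    }
}
```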

To get an X509 proxy, webdav does this:

return _credentialService.getDelegatedCredential(
        dn, Objects.toString(Subjects.getPrimaryFqan(subject), null),
        20, MINUTES);

The relevant info is extracted from the Subject (the DN and the primary FQAN). There is an additional consideration here, however: even though GSI is currently the only crypto protocol supported by the standard dCache xrootd modules, there should still be some minimal encapsulation of the kind of proxy being sought.

It would also be convenient to optimize here by using a Guava cache in the door, from which proxies are evicted periodically. The cache loader would fetch from the delegation service according to an enum parameter indicating the proxy type. The tricky part may be determining what this parameter should be; this needs a little further investigation (it may be possible to determine it from the public keys in the Subject).
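The shape of such a cache is sketched below, assuming a key built from the DN and primary FQAN. Real code would likely use Guava's CacheBuilder with expireAfterWrite and a CacheLoader that calls the delegation service; this JDK-only stand-in just illustrates the load-on-miss and time-based-eviction behavior. All names are hypothetical.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative expiring proxy cache; a placeholder for a Guava LoadingCache.
final class ProxyCache {
    private record Entry(String proxy, long loadedAt) {}

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final long ttlMillis;                    // eviction interval
    private final Function<String, String> loader;   // delegation-service call

    ProxyCache(long ttlMillis, Function<String, String> loader) {
        this.ttlMillis = ttlMillis;
        this.loader = loader;
    }

    String get(String dnAndFqan) {
        Entry e = cache.get(dnAndFqan);
        if (e == null || System.currentTimeMillis() - e.loadedAt() > ttlMillis) {
            // miss or expired: reload from the delegation service
            e = new Entry(loader.apply(dnAndFqan), System.currentTimeMillis());
            cache.put(dnAndFqan, e);
        }
        return e.proxy();
    }
}
```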

As noted already, the process of passing the proxy along to the pool occurs during the doOnOpen of the XrootdRedirectHandler; in particular, when TPC to dCache is involved, the code path generates a write transfer to the pool:

                transfer = _door.write(remoteAddress, path,
                        ioQueue, uuid, createDir, overwrite, size, _maximumUploadSize,
                        localAddress, req.getSubject(), _authz, persistOnSuccessfulClose,
                        ((_isLoggedIn) ? _userRootPath : _rootPath));

Under the covers, this generates a transfer in the XrootdDoor, which also messages the selected pool to start a mover on its behalf:

transfer.selectPoolAndStartMover()

The Transfer in turn eventually calls startMoverAsync, which creates and embeds the ProtocolInfo object.

ProtocolInfo protocolInfo = getProtocolInfoForPool();
PoolIoFileMessage message;
if (isWrite()) {
    long allocated = _allocated;
    if (allocated == 0 && fileAttributes.isDefined(SIZE)) {
        allocated = fileAttributes.getSize();
    }
    message = new PoolAcceptFileMessage(pool.getName(), protocolInfo,
                                        fileAttributes, pool.getAssumption(),
                                        _maximumSize, allocated);
}

This last method is inside the Transfer object; thus the XrootdDoor needs to pass the proxy into the Transfer upon construction, and the Transfer in turn will place it in the ProtocolInfo implementation (XrootdProtocolInfo) to be included in the PoolAcceptFileMessage message.
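Since the ProtocolInfo travels inside a cell message, the credential field it carries must be serializable. The following minimal sketch shows the shape of such a field; XrootdProtocolInfo itself is not reproduced here, and the field and accessor names are illustrative.

```java
import java.io.Serializable;

// Illustrative stand-in for a ProtocolInfo implementation carrying the
// delegated proxy to the pool; not the real XrootdProtocolInfo.
class SketchProtocolInfo implements Serializable {
    private static final long serialVersionUID = 1L;

    /** Serialized (e.g. PEM-encoded) delegated proxy, or null when none. */
    private byte[] serializedProxy;

    void setSerializedProxy(byte[] proxy) { this.serializedProxy = proxy; }

    byte[] getSerializedProxy() { return serializedProxy; }
}
```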

On the other side of the message communication, the XrootdPoolRequestHandler's doOnOpen method can easily pull out the proxy from the protocol info when the write is TPC:

} else if ((msg.isNew() || msg.isReadWrite()) && isWrite) {
    boolean posc = (msg.getOptions() & kXR_posc) == kXR_posc ||
                    file.getProtocolInfo().getFlags()   // <-- protocol info available here
                        .contains(XrootdProtocolInfo.Flags.POSC);
    if (opaqueMap.containsKey("tpc.src")) {
        _log.trace("Request to open {} is as third-party destination.", msg);
        descriptor = new TpcWriteDescriptor(file, posc, ctx,
                                            _server,
                                            opaqueMap.get("org.dcache.xrootd.client"),
                                            new XrootdTpcInfo(opaqueMap));
    } else {
        descriptor = new WriteDescriptor(file, posc);
    }
}

The proxy would be included in the constructor for the TpcWriteDescriptor, which would then use it to construct the XrootdTpcClient.

The final step would then simply be to let the GSIClientAuthenticationHandler (or any other handler, if the code is to be generic) access the proxy directly via the client that is already injected into it, rather than having the proxy constructed by the GSIClientAuthenticationFactory as it currently is. The pipeline is built in the XrootdTpcClient:

private void injectHandlers(ChannelPipeline pipeline,
                            List<ChannelHandlerFactory> plugins,
                            TpcSourceReadHandler readHandler)
{
    pipeline.addLast("decoder", new XrootdClientDecoder(this));
    pipeline.addLast("encoder", new XrootdClientEncoder(this));
    AbstractClientRequestHandler handler = new TpcClientConnectHandler();
    handler.setClient(this);
    pipeline.addLast(handler);
    for (ChannelHandlerFactory factory : plugins) {
        ChannelHandler authHandler = factory.createHandler();
        if (authHandler instanceof AbstractClientRequestHandler) {
            ((AbstractClientRequestHandler) authHandler).setClient(this); // <-- client injected here
        }
        pipeline.addLast(authHandler);
    }
    readHandler.setClient(this);
    pipeline.addLast(readHandler);
}
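The relationship between the client and the handler in this final step can be sketched as follows: the handler reads the delegated proxy from the already-injected client instead of having a factory construct one. Class and method names here are illustrative, not the real dCache/xrootd4j API.

```java
// Illustrative stand-in for XrootdTpcClient: it carries the delegated
// proxy received from the door (null means fall back to the old path).
final class SketchTpcClient {
    private final byte[] delegatedProxy;

    SketchTpcClient(byte[] delegatedProxy) {
        this.delegatedProxy = delegatedProxy;
    }

    byte[] getDelegatedProxy() { return delegatedProxy; }
}

// Illustrative stand-in for GSIClientAuthenticationHandler: the client is
// injected (as in injectHandlers above), and the handler consults it for
// the credential instead of constructing a proxy itself.
final class SketchGsiClientAuthenticationHandler {
    private final SketchTpcClient client;

    SketchGsiClientAuthenticationHandler(SketchTpcClient client) {
        this.client = client;
    }

    boolean hasDelegatedCredential() {
        return client.getDelegatedProxy() != null;
    }
}
```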

DIAGRAM OF INTERACTIONS

Note: in the diagram, the cache and cache loader should actually be inside the door component. There will probably be a containing class holding the cache plus the delegation-service client.