GPFS to NFS Migration using Policy Engine

For those who may not know, GPFS has a policy engine with an SQL-like syntax that can be used to create rules for data management and movement. Rules can transparently compress/uncompress files with a certain extension or in a certain directory, or automatically migrate files between storage tiers (HDD, SSD, etc.) within GPFS.
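For anyone who hasn't seen the policy language, here is a minimal sketch of what such rules look like. The rule names, pool names, path, and thresholds are illustrative, not from a real cluster; check the Spectrum Scale ILM documentation for the exact options your release supports:

```
/* Sketch only: names, pools, and thresholds are made up. */

/* Transparently compress files under a directory once they go cold */
RULE 'compress-logs' MIGRATE FROM POOL 'data' COMPRESS('z')
  WHERE PATH_NAME LIKE '/gpfs/fs1/logs/%'
    AND (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 30

/* Move cold files from the SSD pool down to the HDD pool */
RULE 'tier-down' MIGRATE FROM POOL 'ssd' TO POOL 'hdd'
  WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 7
```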

Does anyone know of a method or set of scripts that can be integrated with the GPFS policy engine to migrate cold data to an NFS share, on, say, a NetApp? This is actually GRIDScaler, but instead of migrating old files to DDN WOS (Web Object Storage) I would rather migrate the data entirely out of the ecosystem. When the stub is accessed, the file is transparently migrated back, uncompressed, etc., depending on the original operation performed. Very neat.
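For the migrate-out half of this, the policy engine's EXTERNAL POOL mechanism lets mmapplypolicy hand a file list to a script of your choosing. Below is a minimal sketch of such a handler; the `NFS_TARGET` path is an assumption, and the file-list parsing (path after a `" -- "` separator) is a simplification — the exact list format mmapplypolicy emits is release-dependent, so verify it against the docs. Stubbing and recall are not implemented here:

```shell
# Hedged sketch of a GPFS EXTERNAL POOL handler.
# Usage: nfs_pool_handler COMMAND LISTFILE
# NFS_TARGET and the list-line parsing are assumptions, not the
# documented mmapplypolicy interface -- check your release's docs.
nfs_pool_handler() {
  cmd=$1
  listfile=$2
  : "${NFS_TARGET:=/mnt/netapp/gpfs-archive}"

  case "$cmd" in
    TEST)
      # mmapplypolicy probes the pool before migrating; report "ready".
      return 0
      ;;
    MIGRATE)
      # Assume each list line ends in " -- /path/to/file"; copy the
      # file out to the NFS target, preserving its full path under it.
      while IFS= read -r line; do
        path=${line##* -- }
        dest="$NFS_TARGET$path"
        mkdir -p "$(dirname "$dest")"
        cp -p "$path" "$dest"
      done < "$listfile"
      return 0
      ;;
    *)
      # RECALL, PURGE, etc. left unimplemented in this sketch.
      return 1
      ;;
  esac
}

# When installed as the EXEC target of an EXTERNAL POOL rule, run the
# handler with whatever arguments mmapplypolicy passes in.
if [ "$#" -gt 0 ]; then
  nfs_pool_handler "$@"
fi
```

The stub/recall side is the hard part — without a DMAPI-style hook, accessing the stub won't trigger a recall by itself, which is exactly why the question is worth asking.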

Rsync is a possibility of course, /but/ with policy engine integration it will at least leave a stub behind, so users can still access their files and we keep a unified namespace.

The idea is to take advantage of a built-in GPFS feature but to use it more as a data management tool since we just happen to have some space on a nearby NFS server.

Not sure if this is what you’re looking for, but I’ve written a parallel rsync wrapper that can honor GPFS-generated file lists: instead of having rsync or fpart completely recurse your file system, you can feed the GPFS-generated lists to parsyncfp to be operated on immediately. See ‘Options for using filelists’ in the above link.
Apologies if this is not apropos to what you’re looking for.
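To show how the two pieces would meet, here is a rough sketch of generating a cold-file list with an EXTERNAL LIST policy and handing it off. The file system name, threshold, output paths, and the parsyncfp file-list option are all assumptions — see ‘Options for using filelists’ in the parsyncfp README for the actual flag:

```shell
# Hedged sketch: fs name, paths, 90-day threshold, and the parsyncfp
# invocation below are assumptions, not verified commands.
cat > /tmp/coldlist.pol <<'EOF'
RULE 'ext' EXTERNAL LIST 'cold' EXEC ''
RULE 'pick' LIST 'cold'
  WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 90
EOF

# The next steps require a live GPFS cluster, so they are shown
# commented out; mmapplypolicy should drop list.cold under /tmp/lists:
# mmapplypolicy fs1 -P /tmp/coldlist.pol -f /tmp/lists -I defer
# parsyncfp <filelist option>=/tmp/lists/list.cold user@nfsbox:/archive
```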
Alternatively, the commercial Starfish will track all your files (historically and predictively) and you can tell it where to put them, from other file systems to the cloud, under what conditions, etc. Not cheap, but worth it. (I have no fiduciary relationship with Starfish beyond being an impressed beta tester.)