For those who may not know, GPFS has a policy engine with SQL-like syntax that can be used to create rules for data management and movement. Rules can be created to transparently compress/uncompress files of a certain extension or in a certain directory. Rules can also be created to automatically migrate files between storage tiers (HDD, SSD, etc.) within GPFS.
Does anyone know of a method or set of scripts that can be integrated into the GPFS policy engine to migrate cold data to an NFS share, on, say, NetApp? This is actually GRIDScaler, but instead of migrating old files to DDN WOS (Web Object Storage) I would rather migrate the data entirely out of the ecosystem. With WOS, when accessing the stub the file is transparently migrated back, uncompressed, etc., depending on the original operation performed. Very neat.
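For context, the usual GPFS mechanism for pushing data outside the filesystem is an EXTERNAL POOL, where mmapplypolicy hands a candidate file list to a site-supplied script. A rough sketch (untested; the pool name, script path, and 365-day threshold are all hypothetical placeholders):

```sql
/* Hypothetical: declare an external pool backed by a site-local script.
   mmapplypolicy invokes the script with an operation and a file list;
   the script itself would have to copy data to the NFS share and
   leave the stub behind. */
RULE EXTERNAL POOL 'nfs_archive' EXEC '/usr/local/bin/nfs_migrate.sh'

/* Migrate files not accessed in over a year to the external pool. */
RULE 'cold_to_nfs' MIGRATE FROM POOL 'system'
  TO POOL 'nfs_archive'
  WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 365
```

Applied with something like `mmapplypolicy <fs> -P policy.rules`. The catch is that the transparent recall on stub access is exactly the part GPFS does not provide for external pools out of the box, hence the question.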
Rsync is a possibility of course /but/ with policy engine integration it would at least leave a stub behind, so users keep access and a unified namespace.
The idea is to take advantage of a built-in GPFS feature but use it more as a data management tool, since we just happen to have some space on a nearby NFS server.