
Subscription and slony logs

Subscription

When a site 'subscribes' to a series, the process is initiated by the site sending a message to JSOC saying that it is interested in that series. JSOC sends back the commands to build the table and populate it up to the current slony checkpoint. JSOC then adds the table name to its list of what is being subscribed to by that site.

The only way to know if you're subscribed is to actually look in the lists at JSOC, which are named according to the convention:

/data/pgsql/slon_logs/live/etc/(site).lst
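
If you have a login at JSOC, a quick way to check for a given series is to grep that file; a sketch, where the site name nso and the series are illustrative:

grep -i hmi.rdVfitsc_fd05 /data/pgsql/slon_logs/live/etc/nso.lst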

Slony logs

At JSOC, the logs are written to the directory:

/data/pgsql/slon_logs/live/site_logs/(site)

The slony logs are named according to the convention:

slony1_log_2_00000000000000690325.sql

where the number 690325 is a counter that increments for each file. The term "log" is perhaps somewhat misleading with respect to these files, as they are not warnings or errors printed as the database operates. Rather, they are lists of database operations that have taken place at JSOC. If other sites are to keep in sync with JSOC, they must apply the same operations to their local databases. A typical log file may have entries like:

insert into "hmi"."rdvpspec_fd05" ("recnum","sunum","slotnum","sessionid","sessionns","ln_source_isset","ln_source_carrrot","ln_source_cmlon_index","ln_source_lonhg_index","ln_source_lathg_index",
"ln_source_loncm_index","cparms_sg000","logp_bzero","logp_bscale","carrrot","cmlon","lonhg","lathg","loncm","crpix1","crpix2","cdelt1","cdelt2","cdelt3","delta_k","delta_nu","d_omega","module",
"source","input","created","bld_vers","log_base","datamin","datamax","apode_f","apod_min","apod_max","cmlon_index","loncm_index","lathg_index","lonhg_index","sg_000_file","history") values
('9919708','528962345','0','27939659','su_rsb','1','2146','32','48','41','-32','','-8.48432540893554688','0.000635004483736478385','2146','200','120','-12.5','80','64.5','64.5',
'0.101023711','0.101023711','0.000181805124','0.101023711','28.9351845','0.000181805124','pspec3 v 1.1','hmi.rdVtrack_fd05[2146][200][120.0][-12.5][-80.0]',
'hmi.rdVtrack_fd05[2146][200]','1171214459','V8R2','2.71828182845904509','-29.1219711','12.1533213','0.96875','0.9765625','1','32','-32','41','48','logP.fits','');

insert into "hmi"."rdvpspec_fd05" ("recnum","sunum","slotnum","sessionid","sessionns","ln_source_isset","ln_source_carrrot","ln_source_cmlon_index","ln_source_lonhg_index","ln_source_lathg_index",
"ln_source_loncm_index","cparms_sg000","logp_bzero","logp_bscale","carrrot","cmlon","lonhg","lathg","loncm","crpix1","crpix2","cdelt1","cdelt2","cdelt3","delta_k","delta_nu","d_omega","module",
"source","input","created","bld_vers","log_base","datamin","datamax","apode_f","apod_min","apod_max","cmlon_index","loncm_index","lathg_index","lonhg_index","sg_000_file","history") values
('9919709','528962345','1','27939659','su_rsb','1','2146','32','48','42','-32','','-9.26125526428222656','0.000644568281907301633','2146','200','120','-15','-80','64.5','64.5',
'0.101023711','0.101023711','0.000181805124','0.101023711','28.9351845','0.000181805124','pspec3 v 1.1','hmi.rdVtrack_fd05[2146][200][120.0][-15.0][-80.0]',
'hmi.rdVtrack_fd05[2146][200]','1171214461','V8R2','2.71828182845904509','-30.2097244','11.6872139','0.96875','0.9765625','1','32','-32','42','48','logP.fits','');
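
In normal operation the get_slony_logs.pl script (mentioned below) takes care of these, but it may be useful to know that applying a single log file by hand is just a matter of feeding it to psql; a sketch, assuming the nso_drms database name used later on this page:

psql nso_drms -f slony1_log_2_00000000000000690325.sql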

After a while (daily?), these slony log files are archived into a bundle, named like so:

slony_logs_688467-689890.tar.gz
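
If you need to get at older logs from one of these bundles, extracting it recovers the individual .sql files (the filename here is the example above):

tar xzf slony_logs_688467-689890.tar.gz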

And then after a longer period of time - two weeks? - these archives of slony log files are deleted. It is critical that the slony logs are applied at remote sites before they are deleted at JSOC.

Slony and the DRMS database

The small table _jsoc.sl_archive_tracking contains information about the most recent slony log ingested:

prompt> psql nso_drms

nso_drms=# select * from _jsoc.sl_archive_tracking;
 at_counter |         at_created         |        at_applied
------------+----------------------------+---------------------------
     886400 | 2014-06-27 09:36:17.452634 | 2014-06-27 16:37:03.71184
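
The at_counter value should correspond to the counter in the slony log filenames, so for use in scripts it can be pulled out on its own; a sketch, again assuming the nso_drms database:

psql -t -A -c "select at_counter from _jsoc.sl_archive_tracking" nso_drms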

Subscription at your site

At your site there will be a script named subscribe_series, in the same directory as the get_slony_logs.pl script (at NSO it is in /datarea/production/subscribe_series).

To subscribe to a series you need to edit the config file - etc/subscribe_list.cfg at NSO - and then run the script like so:

./subscribe_series ./etc/repclient.live.cfg ./etc/subscribe_list.cfg ~/.ssh-agent_rs.sh

The format of etc/subscribe_list.cfg is:

hmi.rdvflows_fd30_frame subscribe
hmi.rdvflows_fd15_frame subscribe
hmi.fsvbinned_nrt subscribe
hmi.rdVfitsc_fd05 subscribe
hmi.rdVfitsc_fd15 subscribe
hmi.rdVfitsc_fd30 subscribe
hmi.rdVfitsf_fd05 subscribe
hmi.rdVfitsf_fd15 subscribe
hmi.rdVfitsf_fd30 subscribe
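
The second field is an action keyword; to drop a series, an unsubscribe action is reportedly accepted in the same file, though check with JSOC before relying on this:

hmi.rdvflows_fd30_frame unsubscribe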

Manual cleanup before subscription

Sometimes it is necessary to manually clean up after a failed subscription attempt, since partially built tables can block subsequent subscription attempts. To do this for the hmi.v_720s series, run the following in the database:

DROP TABLE hmi.v_720s;
DROP SEQUENCE hmi.v_720s_seq;
DELETE FROM hmi.drms_series where seriesname = 'hmi.v_720s';
DELETE FROM hmi.drms_segment where seriesname = 'hmi.v_720s';
DELETE FROM hmi.drms_keyword where seriesname = 'hmi.v_720s';
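
Afterwards, it is worth a quick sanity check that nothing remains in the DRMS bookkeeping tables; a sketch, assuming the nso_drms database used above (the query should return zero rows):

psql nso_drms -c "select seriesname from hmi.drms_series where seriesname = 'hmi.v_720s';"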

NOTE that you MUST then subscribe. Otherwise incoming slony updates may still pertain to this table, which could cause them to fail.

If you miss more than about a month's worth of slony updates, it will be necessary to unsubscribe and resubscribe to regenerate your data tables. Obviously, this is to be avoided.