This page discusses how to get feedtime running on a normal Linux distribution.
I have just completed running this on Mandriva 2010; you need version 20100501-1 or later. Feedtime runs headless (i.e. there is no GUI), but the configuration files are fairly easy to edit. (Note: the format was not intended for direct human editing, but it is just a CSV file with a '|' separator.) The GUI on the NMT is very simple, and I suspect a competent web developer could get something working in an evening or two. See GUIPort for more details on creating a GUI, but first read this page to get a feel for how everything fits together.
Feedtime is designed to run on busybox, and uses a subset of commands that are found on almost every distro. It is implemented primarily in bash and awk, with wget doing the heavy lifting.
Feedtime has two main program components.
Download the latest csi package.
unzip csi_feedtime-20100430-2.zip
mkdir feedtime
cd feedtime
tar xvf ../feedtime-20100430-2.tar
./feedtime.sh install
Now verify it has installed using 'crontab -l'; you should see a crontab entry for feedtime:
*/30 * * * * /path/to/feedtime/feedtime.sh cron #feedtime
Run ./feedtime.sh on its own to get a list of commands. The main ones are:
'start'    : Start the feedtime scanner - the installed cron job will scan each time it runs.
'stop'     : Stop the feedtime scanner - the installed cron job will exit immediately when it runs.
'test'     : Start in test mode - nzbs are not downloaded, and more diagnostics go to the log files.
'now'      : Force an immediate scan of the rss feeds.
'showlog'  : Display the results of the last scan.
'clearlog' : Clear the log file.
'skip'     : Skip over all nzbs currently on the feed.
'update_schedule'        : Update when the cron job runs, according to the cfg file.
'update_nzbget_schedule' : Update when nzbget is paused, according to the cfg file.
'check_nzbget_schedule'  : Set nzbget activity according to the current time - called from cron.

The output appears in the log file ./data/feedtime.out
All downloaded or broken nzbs appear in ./data/feedtime.history
These files contain a url that can be used to re-initiate the download.
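The re-download step can be sketched as follows. The exact field layout of feedtime.history is not documented here, so this assumes only that one '|'-separated field holds an http URL; the sample line and field contents are hypothetical.

```shell
# Build a tiny mock history file (the real one lives at ./data/feedtime.history)
mkdir -p data
printf '1|nzbindex|http://example.com/download/123.nzb|ok\n' > data/feedtime.history

# Pull the URL field back out so it can be handed to wget for a re-download
url=$(tail -n 1 data/feedtime.history | tr '|' '\n' | grep -m1 '^http')
echo "$url"
# The URL could then be re-fetched with: wget -O redownload.nzb "$url"
```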
Feedtime data files. All data files are text files.
schedule_mins="*/30"
schedule_hrs="*"
liveMatchLimit="20"
concurrent_rss="1"
concurrent_nzb="0"
group_by=""
This is a static file. If a setting is not in the main feedtime.cfg file, its default value is taken from the first value after the '|'.
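The default lookup can be sketched with awk. The options line below is copied from the table further down; the parsing logic is an illustration, not feedtime's actual code.

```shell
# Each options line is 'name:description|default|alternative|...'.
# The default value is therefore the 2nd '|'-separated field.
line='max_size_gb:Maximum NZB Size in Gb|4|6|8|10|12|16|20|-'
default=$(printf '%s\n' "$line" | awk -F'|' '{print $2}')
echo "$default"
```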
Note that nzbget has its own scheduling, so the nzbget peak-hours settings can be ignored. Feedtime was originally developed against earlier versions of nzbget, which are still shipped with NMT applications.
max_size_gb:Maximum NZB Size in Gb|4|6|8|10|12|16|20|-
concurrent_nzb:Process all nzbs for a feed at the same time|1|0
concurrent_rss:Process all feeds at the same time|0|1
group_by:Tags that determine distinct <a href="http://code.google.com/p/feedtime/wiki/PriorityGroups#group_by" target="_blank">nzb groups</a>|
group_priority:Tags to calculate nzb priority within a <a href="http://code.google.com/p/feedtime/wiki/PriorityGroups#group_priority" target="_blank">group</a>|720p:10,proper:1,repack:1,immerse:-1
par_percent:Minimum par percentage. At time of writing, fake posts often have 3% or less|0|1|2|3|4|5|10
liveMatchLimit:Limit number of nzbs matched per check. Prevents accidentally downloading everything.|10|20|50|100|500
schedule_hrs:Hours setting. Hours to run. Eg. 3,6,9,12 or *=Every hour or */2=Every 2 hours. On automated feeds run every hour|*
schedule_mins:Minutes past the above hour setting. Eg. 10,40 means 10 and 40 minutes past the hour. */15 = every 15 minutes.|0,30
nzbget_weekday_pause_hour:Start of weekday peak hours. Stop nzbget downloading|-|0|1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|16|17|18|19|20|21|22|23
nzbget_weekday_unpause_hour:End of weekday peak hours. Start nzbget downloading|-|0|1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|16|17|18|19|20|21|22|23
nzbget_weekend_pause_hour:Start of weekend peak hours. Stop nzbget downloading|-|0|1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|16|17|18|19|20|21|22|23
nzbget_weekend_unpause_hour:End of weekend peak hours. Start nzbget downloading|-|0|1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|16|17|18|19|20|21|22|23
nzbget_check_frequency:Check nzbget is adhering to schedule (minutes).|0|15|30|60|90
On first install this is created from ./data/feeds2.example
This has the format
1|nzbindex-wdl|0|https://www.nzbindex.nl/rss/?q=web-dl&minsize=50&complete=1&more=1&max=50|1
2|nzbindex|0|http://www.nzbindex.nl/rss/alt.binaries.multimedia/?q=hdtv+-dvdrip+-bluray+-complete+-channel+-geographic&hidespam=1&complete=1&max=50&more=1&minsize=50&maxsize=4000|0
3|nzbclub|0|http://www.nzbclub.com/nzbfeed.aspx?ss=hdtv+-dvdrip+-channel+-complete+-geographic&us=alt.binaries.multimedia&sz=13&ez=24&sp=1&sa=1|2
4|nzbmatrix|0|http://rss.nzbmatrix.com/rss.php?subcat=41&term=hdtv+-complete+-dvdrip+-dvdr+-bluray|1
5|nzbsorg|0|http://nzbs.org/rss.php?catid=14|1
6|nzbsrus|0|https://www.nzbsrus.com/rssfeed.php?cat=91,72,75|1
7|tvbinz|0|https://tvbinz.net/rss_new.php|1
8|nzbindex-hd|0|http://www.nzbindex.nl/rss/alt.binaries.multimedia/?q=720p+hdtv+-dvdrip+-bluray+-complete+-channel+-geographic&hidespam=1&complete=1&max=50&more=1&minsize=50&maxsize=4000|0
9|nzbclub|0|http://www.nzbclub.com/nzbfeed.aspx?ss=720p+hdtv+-dvdrip+-channel+-complete+-geographic&us=alt.binaries.multimedia&sz=13&ez=24&sp=1&sa=1|2
10|nzbmatrix-api|0|http://rss.nzbmatrix.com/rss.php?page=download&subcat=41&term=hdtv+-complete+-dvdrip+-dvdr+-bluray|1
For each feed the GUI will create a folder ./data/feed_<feedid>. The GUI only creates the folder when the user tries to edit that feed and the folder is not present. If you are not using a GUI, you will have to create it manually. This folder holds the following files:
This file holds a list of tv name patterns - an nzb must match at least one of these patterns. An example filename is ./data/feed_4/tv.list
Each line is converted to a pattern which is matched against Subject titles
By default patterns start in (+) mode. Examples: "Doctor Who-Confidential" will match "Doctor Who" but not the "Confidential" episodes. "-Confidential+Doctor Who" will do the same thing (note that "Who" is part of the "+" text). Also, due to the popular naming convention, "Doctor Who S?" will limit matches to the main series. Feedtime looks inside each nzb to do intelligent filtering of "sample" nzbs, so there is no need to filter these manually.
24 S?
30 rock
8 out of 10 cats
90210 S?
American Dad S?
Bones s
Breaking Bad s?
Burn Notice S?
Californication s?
Chuck s?
CSI
Dexter S?
Hustle S
Leverage
Lie To Me
Szcocks
nova s?
Nurse Jackie
Psych S?
Royal Pains
Ugly Betty s?
Ultimate Fighter s?
United States of Tara S?
waterloo road
Weeds s?
would i lie to you
Note: for shows that have simple one-word titles like '24' you should add some more text to help select the right shows. I use '24 S' (with a trailing space and 'S'), which should match the start of the episode identifier (e.g. 24 S04E03). The syntax is one pattern per line.
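The '+'/'-' pattern semantics described above can be sketched with grep (feedtime itself is awk-based; this is only an illustration of the filtering effect, and the subject lines are made up):

```shell
# Pattern "Doctor Who-Confidential": keep subjects matching "Doctor Who",
# then drop any that also contain "Confidential".
subjects='Doctor Who S05E01
Doctor Who Confidential S05E01
Top Gear S14E02'
matched=$(printf '%s\n' "$subjects" | grep -i 'Doctor Who' | grep -vi 'Confidential')
echo "$matched"
```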
The tv.global file has the same format as the tv.list file. nzb files must match every rule in the global file. An example filename is ./data/feed_4/tv.global
I typically use this to exclude stuff
-dvdrip
-bdrip
-bluray
-complete
Note it is more efficient to do global excludes in the main rss feed URL. The syntax varies by site, but it is usually a matter of adding '+-somestring' to the search term in the URL. I filter both at the rss URL and within feedtime, just to avoid the risk of downloading complete BD rips of old seasons.
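Pushing an exclude into the feed URL itself looks like this (the base URL is a cut-down version of the nzbindex examples above; the exact query syntax varies per indexing site):

```shell
# Append '+-complete' to the search term to exclude "complete" posts at the source
base='http://www.nzbindex.nl/rss/?q=hdtv'
feed_url="${base}+-complete"
echo "$feed_url"
```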
This contains the site definitions. There is an example in sites2.example.
It has the format
id|domain|enabled|username|password|login url|post data|logout url|sed script
Note the use of '|' so be careful with fancy sed scripts!
When processing an RSS url, feedtime will look at the last two parts of the domain name to see if it is present in this table. If the domain is present and enabled, and a username is present, feedtime will try to log in using 'wget --save-session-cookies --post-data ...'. If the 'sed script' (aka link transformation, or url rewrite rule) is present, then feedtime will run each rss link through sed before downloading.
For nzbindex I had to add a special macro @UNIXTIME@ to make sure the history links work (nzbindex links time out after a while, but for normal feedtime operation it's probably not necessary).
Note: in the post data, do not change @USERNAME@ or @PASSWORD@; these are placeholders that feedtime looks for and substitutes with the actual username and password, which should be in fields 4 and 5 if required.
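The substitution step can be sketched like this. The template string matches the @USER@/@PASSWORD@ form used in the example table below; the credentials are hypothetical, and feedtime's own code may do this differently.

```shell
# Fill the placeholders in the 'post data' template with the credentials
# that would be stored in fields 4 and 5 of the site definition.
template='username=@USER@&password=@PASSWORD@'
user='alice'; pass='s3cret'    # hypothetical credentials
postdata=$(printf '%s\n' "$template" | sed "s/@USER@/$user/;s/@PASSWORD@/$pass/")
echo "$postdata"
```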
5|nzbclub.com|1||||||s/_view/_download/
1271789784|nzbindex.nl|1||||||s/-[0-9]+/-@UNIXTIME@/
1|nzbmatrix.com|1|||https://nzbmatrix.com/account-login.php|username=@USER@&password=@PASSWORD@|https://nzbmatrix.com/account-logout.php|s/details/download/
2|nzbs.org|1|||https://www.nzbs.org/user.php|action=dologin&location=/index.php&username=@USER@&password=@PASSWORD@|https://www.nzbs.org/user.php?action=logout|s/action=view/action=getnzb/
3|nzbsrus.com|1|||https://www.nzbsrus.com/takelogin.php|username=@USER@&password=@PASSWORD@|https://firstname.lastname@example.orgemail@example.com/@;s/.hit=1//;s@$@/filename.nzbdlnzb@
4|tvbinz.net|1|||https://tvbinz.net/login.php|username=@USER@&password=@PASSWORD@&login=Login|https://tvbinz.net/collections.php?act=logout|
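Applying a rewrite rule from the table above looks like this. The rule is the nzbclub entry (s/_view/_download/); the input link is hypothetical.

```shell
# Turn an RSS "view" link into a direct "download" link with the sed script
link='http://www.nzbclub.com/nzb_view/12345/example'
dl=$(printf '%s\n' "$link" | sed 's/_view/_download/')
echo "$dl"
```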
Feedtime has two log files:
- A recent-activity log, ./data/feedtime.out, which is re-created each time feedtime is run. It contains detail about what was scanned, especially when run in test mode.
- A history log, ./data/feedtime.history, which contains all nzb downloads. It is cleared manually or when it reaches 500K.
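The 500K cap could work roughly like this (the exact trigger inside feedtime is assumed, not taken from its source):

```shell
# Clear the history log once it passes 500*1024 bytes
mkdir -p data
: > data/feedtime.history                  # demo file, empty here
size=$(wc -c < data/feedtime.history)
if [ "$size" -gt $((500 * 1024)) ]; then
  : > data/feedtime.history                # start the history afresh
fi
echo "$size"
```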