[Python-modules-commits] r7687 - in packages/nodebox-web/trunk (15 files)

sez-guest at users.alioth.debian.org
Sat Feb 21 20:05:01 UTC 2009


    Date: Saturday, February 21, 2009 @ 20:05:00
  Author: sez-guest
Revision: 7687

[svn-inject] Applying Debian modifications to trunk

Added:
  packages/nodebox-web/trunk/debian/
  packages/nodebox-web/trunk/debian/changelog
  packages/nodebox-web/trunk/debian/compat
  packages/nodebox-web/trunk/debian/control
  packages/nodebox-web/trunk/debian/copyright
  packages/nodebox-web/trunk/debian/dirs
  packages/nodebox-web/trunk/debian/examples
  packages/nodebox-web/trunk/debian/install
  packages/nodebox-web/trunk/debian/patches/
  packages/nodebox-web/trunk/debian/patches/adjust_imports.diff
  packages/nodebox-web/trunk/debian/patches/remove-hardwired-newsfeed.diff
  packages/nodebox-web/trunk/debian/patches/series
  packages/nodebox-web/trunk/debian/patches/standalone_examples.diff
  packages/nodebox-web/trunk/debian/pyversions
  packages/nodebox-web/trunk/debian/rules


Property changes on: packages/nodebox-web/trunk/debian
___________________________________________________________________
Name: mergeWithUpstream
   + 1

Added: packages/nodebox-web/trunk/debian/changelog
===================================================================
--- packages/nodebox-web/trunk/debian/changelog	                        (rev 0)
+++ packages/nodebox-web/trunk/debian/changelog	2009-02-21 20:05:00 UTC (rev 7687)
@@ -0,0 +1,19 @@
+nodebox-web (1.9.4.3-1) unstable; urgency=low
+
+  * New Upstream Version
+  * Drop patches adopted upstream.
+
+ -- Serafeim Zanikolas <serzan at hellug.gr>  Fri, 20 Feb 2009 22:10:35 +0000
+
+nodebox-web (1.9.2-2) unstable; urgency=low
+
+  * Add empty __init__.py to fix import failure.
+
+ -- Serafeim Zanikolas <serzan at hellug.gr>  Thu, 19 Jun 2008 22:27:53 +0100
+
+nodebox-web (1.9.2-1) unstable; urgency=low
+
+  * Initial release (Closes: #473039)
+
+ -- Serafeim Zanikolas <serzan at hellug.gr>  Thu, 09 Jun 2008 23:10:49 +0100
+

Added: packages/nodebox-web/trunk/debian/compat
===================================================================
--- packages/nodebox-web/trunk/debian/compat	                        (rev 0)
+++ packages/nodebox-web/trunk/debian/compat	2009-02-21 20:05:00 UTC (rev 7687)
@@ -0,0 +1 @@
+5

Added: packages/nodebox-web/trunk/debian/control
===================================================================
--- packages/nodebox-web/trunk/debian/control	                        (rev 0)
+++ packages/nodebox-web/trunk/debian/control	2009-02-21 20:05:00 UTC (rev 7687)
@@ -0,0 +1,26 @@
+Source: nodebox-web
+Section: python
+Priority: optional
+Maintainer: Serafeim Zanikolas <serzan at hellug.gr>
+Build-Depends: debhelper (>= 5), python-support (>= 0.6), quilt
+Standards-Version: 3.8.0
+Uploaders: Python Modules Packaging Team <python-modules-team at lists.alioth.debian.org>
+Vcs-Svn: svn://svn.debian.org/python-modules/packages/nodebox-web/trunk
+Vcs-Browser: http://svn.debian.org/wsvn/python-modules/packages/nodebox-web
+Homepage: http://nodebox.net/code/index.php/Web
+
+Package: python-nodebox-web
+Architecture: all
+Depends: ${python:Depends}, python-beautifulsoup, python-soappy, python-feedparser, python-simplejson
+Provides: ${python:Provides}
+Description: collection of web-related Python modules
+ Nodebox Web is a collection of Python modules to get content from the web.
+ One can query Yahoo! and Google for links, images, news and spelling
+ suggestions, read RSS and Atom newsfeeds, retrieve articles from Wikipedia,
+ collect quality images from morgueFile or Flickr, browse through HTML
+ documents, clean up HTML, validate URLs, create GIF images from math equations
+ using mimeTeX, and get ironic word definitions from Urban Dictionary.
+ .
+ The library uses a caching mechanism that stores things you download from the
+ web, so they can be retrieved faster the next time. Many of the services also
+ work asynchronously.
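
A quick illustration of the API described above (a minimal sketch based on the
standalone examples further down; the feed URL is only a placeholder):

# Python 2, as targeted by this package (see debian/pyversions).
from nodebox_web import web
from nodebox_web.web import newsfeed

# Newsfeed data is fetched through the cached URL layer.
feed = newsfeed.parse("http://rss.slashdot.org/Slashdot/slashdot")
print feed.title
for item in feed.items:
    print "-", item.title, item.link

# Downloads are cached locally; clear the cache to force fresh requests.
web.clear_cache()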

Added: packages/nodebox-web/trunk/debian/copyright
===================================================================
--- packages/nodebox-web/trunk/debian/copyright	                        (rev 0)
+++ packages/nodebox-web/trunk/debian/copyright	2009-02-21 20:05:00 UTC (rev 7687)
@@ -0,0 +1,103 @@
+This package was debianized by Serafeim Zanikolas <serzan at hellug.gr> on
+Sat, 05 Apr 2008 14:46:34 +0100.
+
+It was downloaded from http://nodebox.net/code/index.php/Web
+
+Upstream Authors: Tom De Smedt <tomdesmedt at organisms.be>,
+                  Frederik De Bleser <frederik at pandora.be>
+
+Copyright: Copyright © 2008 Tom De Smedt, Frederik De Bleser
+
+yahoo.py: Copyright © 2007 by Frederik De Bleser, Tom De Smedt.
+The rest of the files in the nodebox-web distribution are Copyright (c) 2007
+Tom De Smedt.
+
+License:
+
+    This package is free software; you can redistribute it and/or modify
+    it under the terms of the GNU General Public License as published by
+    the Free Software Foundation; either version 2 of the License, or
+    (at your option) any later version.
+ 
+    This package is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+    GNU General Public License for more details.
+ 
+    You should have received a copy of the GNU General Public License
+    along with this package; if not, write to the Free Software
+    Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+
+On Debian systems, the complete text of the GNU General
+Public License can be found in `/usr/share/common-licenses/GPL-2'.
+
+The Debian packaging is © 2008, Serafeim Zanikolas <serzan at hellug.gr> and
+is licensed under the GPL, see above.
+
+
+The upstream package embeds the following programs (only included in the
+source package):
+
+BeautifulSoup.py is licensed under the PSF license
+Copyright (c) 2004-2006 Leonard Richardson <leonardr at segfault.org>
+
+json.py is licensed under the LGPL 2.1 or later.
+Copyright © 2005 Patrick D. Logan <patrickdlogan at stardecisions.com>
+
+feedparser.py, feedparsertest.py, feedparser/tests/* are licensed as follows
+
+Copyright (c) 2002-2005, Mark Pilgrim
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without modification,
+are permitted provided that the following conditions are met:
+
+* Redistributions of source code must retain the above copyright notice,
+  this list of conditions and the following disclaimer.
+* Redistributions in binary form must reproduce the above copyright notice,
+  this list of conditions and the following disclaimer in the documentation
+  and/or other materials provided with the distribution.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 'AS IS'
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+
+soap.py: Copyright © by Cayce Ullman <cayce at actzero.com>
+                          Brian Matthews <blm at actzero.com>
+
+Copyright (SOAPpy):
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+Redistributions of source code must retain the above copyright notice,
+this list of conditions and the following disclaimer. Redistributions
+in binary form must reproduce the above copyright notice, this list of
+conditions and the following disclaimer in the documentation and/or
+other materials provided with the distribution.
+
+Neither the name of actzero, inc. nor the names of its contributors
+may be used to endorse or promote products derived from this software
+without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR
+CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Added: packages/nodebox-web/trunk/debian/dirs
===================================================================
--- packages/nodebox-web/trunk/debian/dirs	                        (rev 0)
+++ packages/nodebox-web/trunk/debian/dirs	2009-02-21 20:05:00 UTC (rev 7687)
@@ -0,0 +1,2 @@
+usr/share/python-support/python-nodebox-web/nodebox_web/web
+usr/share/doc/python-nodebox-web/examples

Added: packages/nodebox-web/trunk/debian/examples
===================================================================
--- packages/nodebox-web/trunk/debian/examples	                        (rev 0)
+++ packages/nodebox-web/trunk/debian/examples	2009-02-21 20:05:00 UTC (rev 7687)
@@ -0,0 +1,8 @@
+standalone-examples/_web_example1.py
+standalone-examples/_web_example2.py
+standalone-examples/_web_example3.py
+standalone-examples/_web_example4.py
+standalone-examples/_web_example5.py
+standalone-examples/_web_example6.py
+standalone-examples/_web_example8.py
+standalone-examples/_web_example9.py

Added: packages/nodebox-web/trunk/debian/install
===================================================================
--- packages/nodebox-web/trunk/debian/install	                        (rev 0)
+++ packages/nodebox-web/trunk/debian/install	2009-02-21 20:05:00 UTC (rev 7687)
@@ -0,0 +1,14 @@
+debian/tmp/__init__.py usr/share/python-support/python-nodebox-web/nodebox_web/
+__init__.py usr/share/python-support/python-nodebox-web/nodebox_web/web/
+cache.py usr/share/python-support/python-nodebox-web/nodebox_web/web/
+html.py usr/share/python-support/python-nodebox-web/nodebox_web/web/
+url.py usr/share/python-support/python-nodebox-web/nodebox_web/web/
+page.py usr/share/python-support/python-nodebox-web/nodebox_web/web/
+mimetex.py usr/share/python-support/python-nodebox-web/nodebox_web/web/
+delicious.py usr/share/python-support/python-nodebox-web/nodebox_web/web/
+newsfeed.py usr/share/python-support/python-nodebox-web/nodebox_web/web/
+flickr.py usr/share/python-support/python-nodebox-web/nodebox_web/web/
+morguefile.py usr/share/python-support/python-nodebox-web/nodebox_web/web/
+urbandictionary.py usr/share/python-support/python-nodebox-web/nodebox_web/web/
+wikipedia.py usr/share/python-support/python-nodebox-web/nodebox_web/web/
+yahoo.py usr/share/python-support/python-nodebox-web/nodebox_web/web/
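
Together with debian/dirs above, this file places the upstream modules under the
python-support private tree as the nodebox_web.web package, so installed code is
imported like this (a minimal sketch, mirroring the shipped examples):

from nodebox_web.web import wikipedia

# Fetch a Wikipedia article through the packaged module layout.
article = wikipedia.search("Finland", language="en")
print article.title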

Added: packages/nodebox-web/trunk/debian/patches/adjust_imports.diff
===================================================================
--- packages/nodebox-web/trunk/debian/patches/adjust_imports.diff	                        (rev 0)
+++ packages/nodebox-web/trunk/debian/patches/adjust_imports.diff	2009-02-21 20:05:00 UTC (rev 7687)
@@ -0,0 +1,146 @@
+Index: nodebox-web/colr.py
+===================================================================
+--- nodebox-web.orig/colr.py	2009-02-20 23:54:55.000000000 +0000
++++ nodebox-web/colr.py	2009-02-20 23:54:59.000000000 +0000
+@@ -1,6 +1,6 @@
+-from url import URLAccumulator
++from nodebox_web.web.url import URLAccumulator
+ from urllib import quote
+-from cache import Cache
++from nodebox_web.web.cache import Cache
+ import simplejson
+ 
+ def clear_cache():
+@@ -65,7 +65,7 @@
+ 
+     def draw(self, x, y, w=40, h=40):
+         
+-        try: from web import _ctx
++        try: from nodebox_web.web.web import _ctx
+         except: pass
+         
+         from nodebox.graphics import RGB
+@@ -183,4 +183,4 @@
+ #size(500, 650)
+ #themes = search("office")
+ #theme = themes[0]
+-#preview(theme)
+\ No newline at end of file
++#preview(theme)
+Index: nodebox-web/flickr.py
+===================================================================
+--- nodebox-web.orig/flickr.py	2009-02-20 23:54:55.000000000 +0000
++++ nodebox-web/flickr.py	2009-02-20 23:54:59.000000000 +0000
+@@ -1,8 +1,8 @@
+ from urllib import quote_plus
+-from url import URLAccumulator
++from nodebox_web.web.url import URLAccumulator
+ from xml.dom.minidom import parseString
+ import os
+-from cache import Cache
++from nodebox_web.web.cache import Cache
+ 
+ API_KEY = "787081027f43b0412ba41142d4540480"
+ 
+Index: nodebox-web/mimetex.py
+===================================================================
+--- nodebox-web.orig/mimetex.py	2009-02-20 23:54:55.000000000 +0000
++++ nodebox-web/mimetex.py	2009-02-20 23:54:59.000000000 +0000
+@@ -5,9 +5,9 @@
+ # Copyright (c) 2007 by Tom De Smedt.
+ # See LICENSE.txt for details.
+ 
+-from url import URLAccumulator
++from nodebox_web.web.url import URLAccumulator
+ from urllib import quote
+-from cache import Cache
++from nodebox_web.web.cache import Cache
+ 
+ def clear_cache():
+     Cache("mimetex").clear()
+@@ -33,4 +33,4 @@
+     return mimeTeX(eq).image
+ 
+ #eq = "E = hf = \frac{hc}{\lambda} \,\! "
+-#image(gif(eq), 10, 10)
+\ No newline at end of file
++#image(gif(eq), 10, 10)
+Index: nodebox-web/morguefile.py
+===================================================================
+--- nodebox-web.orig/morguefile.py	2009-02-20 23:54:55.000000000 +0000
++++ nodebox-web/morguefile.py	2009-02-20 23:54:59.000000000 +0000
+@@ -9,8 +9,8 @@
+ from urllib import quote_plus
+ from xml.dom.minidom import parseString
+ 
+-from url import URLAccumulator
+-from cache import Cache
++from nodebox_web.web.url import URLAccumulator
++from nodebox_web.web.cache import Cache
+ 
+ def clear_cache():
+     Cache("morguefile").clear()
+@@ -159,4 +159,4 @@
+ 
+ #img = images[0]
+ #img.download()
+-#image(img.path, 0, 0)
+\ No newline at end of file
++#image(img.path, 0, 0)
+Index: nodebox-web/__init__.py
+===================================================================
+--- nodebox-web.orig/__init__.py	2009-02-20 23:54:55.000000000 +0000
++++ nodebox-web/__init__.py	2009-02-20 23:54:59.000000000 +0000
+@@ -34,7 +34,6 @@
+ import html
+ import page
+ import simplejson
+-import json # wrapper for simplejson, backward compatibility.
+ 
+ packages = [
+     "yahoo", "google", 
+@@ -97,4 +96,4 @@
+ # url.parse() has a new .filename attribute (equals .page).
+ # Handy web.save() command downloads data and saves it to a given path.
+ # hex_to_rgb() improvement for hex strings shorter than 6 characters.
+-# Upgraded to BeautifulSoup 3.0.7a
+\ No newline at end of file
++# Upgraded to BeautifulSoup 3.0.7a
+Index: nodebox-web/newsfeed.py
+===================================================================
+--- nodebox-web.orig/newsfeed.py	2009-02-20 23:54:55.000000000 +0000
++++ nodebox-web/newsfeed.py	2009-02-20 23:55:13.000000000 +0000
+@@ -8,7 +8,7 @@
+ 
+ import os
+ 
+-from feedparser import feedparser
++import feedparser
+ 
+ from url import URLAccumulator
+ from html import strip_tags
+@@ -159,4 +148,4 @@
+     print "Author:", item.author
+     print ">>", item.author_detail.name
+     print ">>", item.author_detail.email
+-"""
+\ No newline at end of file
++"""
+Index: nodebox-web/urbandictionary.py
+===================================================================
+--- nodebox-web.orig/urbandictionary.py	2009-02-20 23:54:55.000000000 +0000
++++ nodebox-web/urbandictionary.py	2009-02-20 23:54:59.000000000 +0000
+@@ -1,5 +1,5 @@
+ import url
+-import soap
++import SOAPpy as soap
+ import re
+ from cache import Cache
+ import pickle
+@@ -74,4 +74,4 @@
+                 self.append(ubd)
+             
+ def search(q, cached=True):
+-    return UrbanDictionary(q, cached)
+\ No newline at end of file
++    return UrbanDictionary(q, cached)
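
The pattern applied throughout this patch: intra-package imports become absolute
(nodebox_web.web.*) and the embedded third-party copies are replaced by the packaged
libraries listed in debian/control (a sketch of the resulting import style):

# Absolute intra-package imports introduced by adjust_imports.diff.
from nodebox_web.web.url import URLAccumulator
from nodebox_web.web.cache import Cache

# System-wide libraries instead of the embedded copies.
import feedparser            # python-feedparser
import SOAPpy as soap        # python-soappy
import simplejson            # python-simplejson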

Added: packages/nodebox-web/trunk/debian/patches/remove-hardwired-newsfeed.diff
===================================================================
--- packages/nodebox-web/trunk/debian/patches/remove-hardwired-newsfeed.diff	                        (rev 0)
+++ packages/nodebox-web/trunk/debian/patches/remove-hardwired-newsfeed.diff	2009-02-21 20:05:00 UTC (rev 7687)
@@ -0,0 +1,29 @@
+Index: nodebox-web/newsfeed.py
+===================================================================
+--- nodebox-web.orig/newsfeed.py	2009-02-20 23:57:36.000000000 +0000
++++ nodebox-web/newsfeed.py	2009-02-20 23:58:01.000000000 +0000
+@@ -19,15 +19,6 @@
+ 
+ ### FAVORITE NEWSFEED ################################################################################
+ 
+-favorites = {}
+-try:
+-    path = os.path.join(os.path.dirname(__file__), "newsfeed.txt")
+-    for f in open(path).readlines():
+-        f = f.split(",")
+-        favorites[f[0].strip()] = f[1].strip()
+-except:
+-    pass
+-
+ def favorite_url(name):
+ 
+     if favorites.has_key(name):
+@@ -39,8 +30,6 @@
+ 
+     return None
+ 
+-favorite = favorite_url
+-
+ ### NEWSFEED #########################################################################################
+ 
+ class Newsfeed:
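
With the hard-wired favourites table gone, callers supply feed URLs themselves; a
caller-side replacement could look like this (a minimal sketch; the my_feeds name and
the URL are only illustrations):

from nodebox_web.web import newsfeed

# Keep your own favourites mapping instead of the removed newsfeed.txt lookup.
my_feeds = {"slashdot": "http://rss.slashdot.org/Slashdot/slashdot"}
feed = newsfeed.parse(my_feeds["slashdot"])
print feed.title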

Added: packages/nodebox-web/trunk/debian/patches/series
===================================================================
--- packages/nodebox-web/trunk/debian/patches/series	                        (rev 0)
+++ packages/nodebox-web/trunk/debian/patches/series	2009-02-21 20:05:00 UTC (rev 7687)
@@ -0,0 +1,3 @@
+adjust_imports.diff
+remove-hardwired-newsfeed.diff
+standalone_examples.diff

Added: packages/nodebox-web/trunk/debian/patches/standalone_examples.diff
===================================================================
--- packages/nodebox-web/trunk/debian/patches/standalone_examples.diff	                        (rev 0)
+++ packages/nodebox-web/trunk/debian/patches/standalone_examples.diff	2009-02-21 20:05:00 UTC (rev 7687)
@@ -0,0 +1,206 @@
+diff -Nur nodebox-web-1.9.2/standalone-examples/_web_example1.py blah/standalone-examples/_web_example1.py
+--- nodebox-web-1.9.2/standalone-examples/_web_example1.py	1970-01-01 01:00:00.000000000 +0100
++++ blah/standalone-examples/_web_example1.py	2008-04-05 23:12:08.000000000 +0100
+@@ -0,0 +1,20 @@
++# Working with URL's.
++
++from nodebox_web import web
++
++# Is this a valid URL?
++print web.is_url("http://nodebox.net")
++
++# Does the page exist?
++print web.url.not_found("http://nodebox.net/nothing")
++
++# Split the URL into different components.
++url = web.url.parse("http://nodebox.net/code/index.php/Home")
++print "domain:", url.domain
++print "page:", url.page
++
++# Retrieve data from the web.
++url = "http://nodebox.net/code/data/media/header.jpg"
++print web.url.is_image(url)
++img = web.url.retrieve(url)
++print "download errors:", img.error
+diff -Nur nodebox-web-1.9.2/standalone-examples/_web_example2.py blah/standalone-examples/_web_example2.py
+--- nodebox-web-1.9.2/standalone-examples/_web_example2.py	1970-01-01 01:00:00.000000000 +0100
++++ blah/standalone-examples/_web_example2.py	2008-04-05 23:13:08.000000000 +0100
+@@ -0,0 +1,24 @@
++# Parsing web pages.
++
++from nodebox_web import web
++
++url = "http://nodebox.net"
++print web.url.is_webpage(url)
++
++# Retrieve the data from the web page and put it in an easy object.
++html = web.page.parse(url)
++
++# The actual URL you are redirected to.
++# This will be None when the page is retrieved from cache.
++print html.redirect
++
++# Get the web page title.
++print html.title
++
++# Get all the links, including internal links in the same site.
++print html.links(external=False)
++
++# Browse through the HTML tree, find <div id="content">,
++# strip tags from it and print out the contents.
++content = html.find(id="content")
++web.html.plain(content)
+diff -Nur nodebox-web-1.9.2/standalone-examples/_web_example3.py blah/standalone-examples/_web_example3.py
+--- nodebox-web-1.9.2/standalone-examples/_web_example3.py	1970-01-01 01:00:00.000000000 +0100
++++ blah/standalone-examples/_web_example3.py	2008-04-05 23:14:54.000000000 +0100
+@@ -0,0 +1,24 @@
++# Querying Yahoo!
++
++from nodebox_web import web
++from nodebox_web.web import yahoo
++
++# Get a list of links for a search query.
++links = yahoo.search_images("food")
++print links
++
++# Retrieve a random image.
++img = web.url.retrieve(links)
++
++# We can't always trust the validity of data from the web,
++# the site may be down, the image removed, etc.
++# If you're going to do a lot of batch operations and
++# you don't want the script to halt on an error,
++# put your code inside a try/except statement.
++try:
++    data=img.data
++except:
++    print str(img.error)
++    
++# An easier command is web.download():
++img = web.download(links)
+diff -Nur nodebox-web-1.9.2/standalone-examples/_web_example4.py blah/standalone-examples/_web_example4.py
+--- nodebox-web-1.9.2/standalone-examples/_web_example4.py	1970-01-01 01:00:00.000000000 +0100
++++ blah/standalone-examples/_web_example4.py	2008-04-05 23:16:00.000000000 +0100
+@@ -0,0 +1,22 @@
++# Reading newsfeeds.
++
++from nodebox_web import web
++from nodebox_web.web import newsfeed
++
++url = "http://rss.slashdot.org/Slashdot/slashdot"
++
++# Parse the newsfeed data into a handy object.
++feed = newsfeed.parse(url)
++
++# Get the title and the description of the feed.
++print feed.title, "|", feed.description
++
++for item in feed.items:
++    print "-" * 40
++    print "- Title       :", item.title
++    print "- Link        :", item.link
++    print "- Description :", web.html.plain(item.description)
++    print "- Date        :", item.date
++    print "- Author      :", item.author
++
++print item.description
+diff -Nur nodebox-web-1.9.2/standalone-examples/_web_example5.py blah/standalone-examples/_web_example5.py
+--- nodebox-web-1.9.2/standalone-examples/_web_example5.py	1970-01-01 01:00:00.000000000 +0100
++++ blah/standalone-examples/_web_example5.py	2008-04-05 23:17:15.000000000 +0100
+@@ -0,0 +1,25 @@
++# Wikipedia articles.
++
++from nodebox_web import web
++from nodebox_web.web import wikipedia
++
++q = "Finland"
++article = wikipedia.search(q, language="nl")
++
++# Print the article title.
++print article.title
++
++# Get a list of all the links to other articles.
++# We can supply these to a new search.
++print article.links
++
++# The title of each paragraph
++for p in article.paragraphs: 
++    print p.title
++    #print "-"*40
++    #print p
++
++print article.paragraphs[0]
++
++print
++print article.references[0]
+diff -Nur nodebox-web-1.9.2/standalone-examples/_web_example6.py blah/standalone-examples/_web_example6.py
+--- nodebox-web-1.9.2/standalone-examples/_web_example6.py	1970-01-01 01:00:00.000000000 +0100
++++ blah/standalone-examples/_web_example6.py	2008-04-05 23:28:50.000000000 +0100
+@@ -0,0 +1,16 @@
++# Retrieve images from MorgueFile.
++
++from nodebox_web import web
++from nodebox_web.web import morguefile
++
++q = "cloud"
++img = morguefile.search(q)[0]
++
++print img
++
++# A morgueFile image in the list has 
++# a number of methods and properties.
++# The download() method caches the image locally 
++# and returns the path to the file.
++img = img.download()
++
+diff -Nur nodebox-web-1.9.2/standalone-examples/_web_example7.py blah/standalone-examples/_web_example7.py
+--- nodebox-web-1.9.2/standalone-examples/_web_example7.py	1970-01-01 01:00:00.000000000 +0100
++++ blah/standalone-examples/_web_example7.py	2008-04-06 13:56:13.000000000 +0100
+@@ -0,0 +1,18 @@
++# Color themes from Kuler.
++
++from nodebox_web import web
++from nodebox_web.web import kuler
++
++# Get the current most popular themes.
++themes = kuler.search_by_popularity()
++
++# the code below assumes the availability of methods that are defined in other
++# parts of the nodebox library
++#
++# Display colors from the first theme.
++#for i in range(100):
++#    for r, g, b in themes[0]:
++#        fill(r, g, b, 0.8)
++#        rotate(random(360))
++#        s = random(50) + 10
++#        oval(random(300), random(300), s, s)
+diff -Nur nodebox-web-1.9.2/standalone-examples/_web_example8.py blah/standalone-examples/_web_example8.py
+--- nodebox-web-1.9.2/standalone-examples/_web_example8.py	1970-01-01 01:00:00.000000000 +0100
++++ blah/standalone-examples/_web_example8.py	2008-04-05 23:37:40.000000000 +0100
+@@ -0,0 +1,8 @@
++# Definitions from the Urban Dictionary.
++
++from nodebox_web import web
++from nodebox_web.web import urbandictionary
++
++definitions = urbandictionary.search("human")
++for i, d in enumerate(definitions):
++    print "%d: %s" % (i, d)
+diff -Nur nodebox-web-1.9.2/standalone-examples/_web_example9.py blah/standalone-examples/_web_example9.py
+--- nodebox-web-1.9.2/standalone-examples/_web_example9.py	1970-01-01 01:00:00.000000000 +0100
++++ blah/standalone-examples/_web_example9.py	2008-04-05 23:38:11.000000000 +0100
+@@ -0,0 +1,13 @@
++# Clearing the cache.
++
++from nodebox_web import web
++
++# Queries and images are cached locally for speed,
++# so it's a good idea to empty the cache now and then.
++# Also, when a query fails (internet is down etc.),
++# this "bad" query is also cached.
++# Then you may want to clear the cache of the specific
++# portion of the library you're working with,
++# for example: morguefile.clear_cache()
++
++web.clear_cache()

Added: packages/nodebox-web/trunk/debian/pyversions
===================================================================
--- packages/nodebox-web/trunk/debian/pyversions	                        (rev 0)
+++ packages/nodebox-web/trunk/debian/pyversions	2009-02-21 20:05:00 UTC (rev 7687)
@@ -0,0 +1 @@
+2.3-

Added: packages/nodebox-web/trunk/debian/rules
===================================================================
--- packages/nodebox-web/trunk/debian/rules	                        (rev 0)
+++ packages/nodebox-web/trunk/debian/rules	2009-02-21 20:05:00 UTC (rev 7687)
@@ -0,0 +1,46 @@
+#!/usr/bin/make -f
+
+# Uncomment this to turn on verbose mode.
+#export DH_VERBOSE=1
+
+include /usr/share/quilt/quilt.make
+
+clean: unpatch
+	dh_testdir
+	dh_testroot
+	dh_clean
+	rm -rf debian/tmp
+
+build: build-stamp
+build-stamp: patch
+install: build
+	dh_testdir
+	dh_testroot
+	dh_clean -k
+	dh_installdirs
+
+# Build architecture-independent files here.
+binary-indep: build install
+	dh_testdir
+	dh_testroot
+	dh_link
+	dh_installchangelogs
+	mkdir debian/tmp/
+	touch debian/tmp/__init__.py
+	dh_install --list-missing
+	dh_installdocs
+	dh_installexamples
+	dh_pysupport
+	dh_fixperms
+	dh_compress
+	dh_installdeb
+	dh_gencontrol
+	dh_md5sums
+	dh_builddeb
+
+# Build architecture-dependent files here.
+binary-arch: build install
+# nothing for a python package
+
+binary: binary-indep binary-arch
+.PHONY: build build-stamp clean binary-indep binary-arch install


Property changes on: packages/nodebox-web/trunk/debian/rules
___________________________________________________________________
Name: svn:executable
   + *
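
The binary-indep target in debian/rules creates an empty debian/tmp/__init__.py, which
debian/install places as the package-level nodebox_web/__init__.py (the fix noted in the
1.9.2-2 changelog entry). A minimal post-install smoke test (a sketch, to be run against
the installed python-nodebox-web):

from nodebox_web import web

# Should print True once the package-level __init__.py is in place.
print web.is_url("http://nodebox.net")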



