Automated testing

{{Hacking OpenHatch}}
This page is now included in our project package, and is automatically generated by Sphinx at openhatch.readthedocs.org: [http://openhatch.readthedocs.org/en/latest/advanced/advanced_testing.html Advanced Testing]
The '''purpose of this page''' is to show you how to write automated tests within the OpenHatch codebase.

If you already know how software testing works, skip to '''Details specific to OpenHatch'''.

== Tests: An overview ==

When you run:

 python manage.py test

you'll see a bunch of dots. Dots mean success.

This runs the many tests that are part of the OpenHatch code.

In general, you should write a test whenever you add new functionality. This page explains how and when to write new tests, and how to run the tests we have.
=== What a basic test looks like ===

Imagine this is in mysite/base/views.py:

 def multiply(x, y):
     return x * y

Then this would be in mysite/base/tests.py:

 import django.test
 import mysite.base.views

 class TestMultiplication(django.test.TestCase):
     def test_multiplication(self):
         # multiply(7, 5) should return 35
         self.assertEqual(35, mysite.base.views.multiply(7, 5))
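If you want to experiment with this pattern outside of a Django project, the same test can be written with Python's built-in unittest module. This is a standalone sketch: the multiply function is inlined here so the example runs without the OpenHatch codebase on the path.

```python
import unittest


def multiply(x, y):
    # Inlined stand-in for mysite.base.views.multiply, so this
    # example runs without a Django project available.
    return x * y


class TestMultiplication(unittest.TestCase):
    def test_multiplication(self):
        # multiply(7, 5) should return 35
        self.assertEqual(35, multiply(7, 5))


# Load and run the test case directly, without a runner script.
result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(TestMultiplication).run(result)
```

Running a file like this with <code>python -m unittest</code> prints one dot per passing test, which is the same output you see when <code>manage.py test</code> succeeds.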
=== When a test fails ===

When a test fails, you will see "FAILED" followed by the test name, along with a traceback and a failure summary at the end, e.g. FAILED (failures=2, errors=1, skipped=9).

To force a failure, perhaps because you are curious to see what one looks like, you can add <code>self.assertTrue(False)</code> to a test case that you are interested in running.
== Getting your local dev OpenHatch set up to run tests ==

To run tests correctly, you'll need to have Subversion installed:

 $ apt-get install subversion

Then run the full suite of tests:

 $ python manage.py test
== General testing tips ==

=== How to write code that is easy to test ===

If you are writing a function, make it '''accept arguments''' for its data, rather than having it calculate the input itself. For example:

'''Good'''

 def multiply(x, y):
     return x * y

'''Less good'''

 def multiply(x):
     y = settings.MULTIPLICATION_FACTOR
     return x * y

It's okay to rely on things like system settings and database content, but in general, the simpler your functions are, the easier they are to test.
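The difference shows up when you write the tests: the explicit-arguments version needs no setup, while the settings-based version has a hidden dependency that must be patched first. Here is a sketch using unittest.mock; the FakeSettings object is a stand-in for a global settings module, not Django's real settings.

```python
import unittest
from unittest import mock


class FakeSettings(object):
    # Stand-in for a global settings module (illustrative only).
    MULTIPLICATION_FACTOR = 5


settings = FakeSettings()


def multiply_good(x, y):
    # Explicit arguments: nothing hidden.
    return x * y


def multiply_less_good(x):
    # Hidden dependency on global settings.
    y = settings.MULTIPLICATION_FACTOR
    return x * y


class TestMultiply(unittest.TestCase):
    def test_good_version(self):
        # No setup needed: both inputs are passed in directly.
        self.assertEqual(35, multiply_good(7, 5))

    def test_less_good_version(self):
        # The hidden dependency has to be patched before we can
        # control the second factor.
        with mock.patch.object(settings, 'MULTIPLICATION_FACTOR', 5):
            self.assertEqual(35, multiply_less_good(7))


result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(TestMultiply).run(result)
```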
== Details specific to OpenHatch ==

=== We regularly run automated testing ===

OpenHatch's automated testing is run by Jenkins, with the interface on the virtual machine donated by GPLHost at http://vm3.openhatch.org:8080/

=== Where to write your tests ===

In general, add tests to the same Django ''app'' as you are editing. For example, if you made changes to '''base/views.py''', then add a test in '''base/tests.py'''.

The test files are kind of ''sprawling''. It doesn't really matter where within the ''tests.py'' file you add your test; I would suggest adding it to the end of the file.
=== The OpenHatch test case helper class ===

In '''mysite/base/tests.py''' there is a TwillTests class. It offers the following convenience methods:

* '''login_with_client'''
* '''login_with_twill'''

=== About fixtures ===

If you inherit from TwillTests, you get some data in your database. You can rely on it being there in your tests.
=== To run your tests ===

What app did you write your test in? Let's pretend it was in '''base''':

 python manage.py test base

=== To run just a few specific tests ===

 python manage.py test base.Feed base.Unsubscribe.test_unsubscribe_view

The structure here is '''app'''.'''class'''.'''method'''. So if you want to run just your own new test, you can do it that way.
=== Mocking and patching ===

This section is important, but we haven't written it yet. Oops.
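Until it is, here is a minimal sketch of the usual approach, using Python's unittest.mock (in older Python 2 codebases, the standalone <code>mock</code> package provides the same API). The ProjectClient class and its methods are illustrative, not part of the OpenHatch codebase.

```python
import unittest
from unittest import mock


class ProjectClient(object):
    # Illustrative class, not from the OpenHatch codebase.

    def fetch_project_count(self):
        # Imagine this makes a slow network call in real code.
        raise RuntimeError('no network access in tests')

    def describe_projects(self):
        return 'We track %d projects.' % self.fetch_project_count()


class TestDescribeProjects(unittest.TestCase):
    def test_describe_projects(self):
        client = ProjectClient()
        # Patch the slow method with a canned return value, so the
        # test exercises describe_projects without touching the network.
        with mock.patch.object(client, 'fetch_project_count',
                               return_value=42):
            self.assertEqual('We track 42 projects.',
                             client.describe_projects())


result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(TestDescribeProjects).run(result)
```

The key idea: patch the expensive or unpredictable dependency, then assert on the logic that surrounds it. The patch is automatically undone when the <code>with</code> block exits.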
=== Testing with Twill, versus the Django test client ===

To make a long story short: the Django test client is good at introspecting how the function worked internally, while Twill tests are good because they let you say "Click on the link called '''log in'''".

We should write more about this. Maybe you, dear reader, can say some more.
Latest revision as of 15:30, 22 August 2014