I am a newbie learning test-driven development, and I can't decide
which mock library to use.
There are a number of them; which one do you prefer?
Two libraries that caught my attention are minimock and dingus.
To me the latter, dingus, is the easiest (see this screencast:
http://vimeo.com/3949077 ), but it has very few downloads on PyPI,
so it scares me a little.
Minimock has wider usage and a larger community, but I have some trouble
using it. Maybe I am wrong, but with minimock you always seem to have to
keep track of the order of imports in your test modules. Then again,
maybe I just don't fully understand how minimock works.
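To illustrate the kind of import-order trap I mean, here is a sketch
using unittest.mock's patch rather than minimock itself: patching a
module attribute only helps if the code under test looks the name up at
call time, so a module that did 'from x import y' before the patch was
applied never sees the replacement.

from unittest.mock import patch  # the standalone 'mock' package works the same way
import smtplib

# This works because smtplib.SMTP is looked up through the module at
# call time. Code that did 'from smtplib import SMTP' beforehand would
# keep the real class; you would have to patch the name where it was
# imported *to*, which is why import order matters.
with patch("smtplib.SMTP") as fake_smtp:
    fake_smtp.return_value = "fake connection"
    assert smtplib.SMTP("localhost") == "fake connection"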
What are your suggestions?
Thanks for your reply! Isn't what you are talking about integration
testing? Aren't unit tests supposed to be fully isolated? By that
logic, even to test a method 'some_method()' of class A I should mock
the instance of class A (i.e. mock 'self').
Please could you explain your thoughts in more detail, and could you
give an example?
For me it's really hard to develop test-first. Often I don't know what
tests to write to replace hardcoded return values with objects that
perform actual work.
I have read several books on TDD and explored http://c2.com/cgi/wiki?TestDrivenDevelopment
and related wikis, but it often seems I don't have enough
understanding to write even a simple application.
And sorry for my English.
"Unit test" is a high-end QA concept. Developers can get the best
return on "developer tests". They don't bother with aerospace-quality
isolation between units.
If a TDD test needs to pull in a bunch of modules to pass, that's
generally a good thing, because they all get indirect testing. If one
of those modules harbors a bug, its local tests might not catch it, but
the higher-level tests still have a chance.
(And if your product still needs unit tests, TDD will make them very
easy for a formal QA team to add.)
However, expensive setup is a design smell. That means if a test case
requires too many lines of code for its Assemble phase (before its
Activate and Assert phases), then maybe those lines of code support
objects that are too coupled, and they need a better design.
Throwing mocks at these objects, instead of decoupling them, will
"perfume" the design smell, instead of curing it.
from unittest.mock import Mock  # or the standalone 'mock' package

frob = Frob()
frob.knob = Mock()                       # overwrite the knob the constructor built
frob.knob.value = Mock(return_value=42)  # knob.value() now returns 42
assert 42 == frob.method_using_knob()
We need the mock because we can't control how Frob's constructor built
its knob. So instead, give Frob the option to construct with a Knob:
knob = Knob(42)
frob = Frob(knob)
Note that in production the Frob constructor never takes a Knob. Maybe
we should upgrade the production code too (!), or maybe Frob's
constructor should only create a knob if it didn't get passed one.
Either technique is acceptable, because the resulting code decouples
Frobs and Knobs just a little bit more.
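A minimal sketch of that optional-knob construction (the bodies of Frob
and Knob are assumptions here; only their names appear above):

class Knob:
    def __init__(self, value=0):
        self._value = value

    def value(self):
        return self._value

class Frob:
    def __init__(self, knob=None):
        # Build a default Knob only when the caller didn't supply one,
        # so production code keeps calling Frob() while a test can
        # inject Knob(42).
        self.knob = knob if knob is not None else Knob()

    def method_using_knob(self):
        return self.knob.value()

assert 42 == Frob(Knob(42)).method_using_knob()

No mock is needed now: the test-time knob is a real Knob.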
You have read too many books on TDD. C-:
Alternate between writing lines of test and lines of code. Run the
tests after the fewest possible edits, and always correctly predict
whether the tests will pass or fail, and with what diagnostic. (And
configure your editor to run the stankin tests, no matter how hard it
fights you!) The high-end tricks will get easier after you get the
basic cycle down.
See these two topics: http://groups.google.com/group/minimock-dev/browse_thread/th[..] http://groups.google.com/group/minimock-dev/browse_thread/th[..]
There are special cases which you have to be aware of if you use minimock.
Thanks for your exhaustive answer.
Actually, I'll investigate your example with 'frob'. From just reading
the example it's not clear to me what I gain from using this approach.
And I have already given up on writing totally isolated tests, because
it looks like a great waste of time.
I can't get a good hit for "construction encapsulation" in Google.
(Although I got some good bad ones!)
This paper _almost_ gets the idea: http://www.netobjectives.com/download/Code Qualities and Practices.pdf
Do you run your tests after the fewest possible edits? Such as 1-3
lines of code?
I'm not sure why the TDD books don't hammer that point down...
That is runtime test isolation. It's not the same thing as "unit test
isolation". Just take care in your tearDown() to scrub whatever state
your tests have touched.
Google "Mock abuse" from here...
I run my tests all the time (they have almost replaced the debugger in
my IDE). But there are times when I can't run the tests after just 1-3
lines of code.
For example, I am developing an application that talks to some web
service. One of the methods of the class that implements the API for
the web service has to parse the XML response. At first, I hardcoded
the return values so that they looked already parsed. But then
additional tests forced me to operate on sample XML data instead of
hardcoded values. So I created a sample XML file that resembles the
response from the server. And after that I can't just write 1-3 lines
between each test run, because I need to read() the file and sort it
out in a loop (at least 6-9 lines of code for a small XML file). Only
after this procedure can I run my tests with the hope that they will
all pass.
Maybe it's not proper TDD, but I can't figure out how to reduce the
time between test runs in a case like the one above.
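To make that concrete, here is a sketch with invented names and the
sample XML inlined instead of read from a file: the multi-line Assemble
work can live in setUp(), so each individual test stays at a line or two.

import unittest
import xml.etree.ElementTree as ET

SAMPLE_XML = "<response><item>spam</item><item>eggs</item></response>"

class TestParseResponse(unittest.TestCase):
    def setUp(self):
        # The 6-9 line Assemble phase lives here once: parse the sample
        # data and sort it out in a loop, as described above.
        root = ET.fromstring(SAMPLE_XML)
        self.items = []
        for item in root.findall("item"):
            self.items.append(item.text)

    def test_parses_both_items(self):
        self.assertEqual(2, len(self.items))

    def test_items_arrive_in_document_order(self):
        self.assertEqual(["spam", "eggs"], self.items)

if __name__ == "__main__":
    unittest.main()

With the fixture factored out this way, each new test is again a 1-3
line edit.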
Right, isolation is essential. But I can't decide to what extent I
should propagate isolation.
For example, in "Python Testing: Beginner's Guide" by Daniel Arbuckle,
the author suggests that if you do unit testing you should isolate the
smallest units of code from each other. For example, if you have a
class like:

class A(object):
    def method1(self):
        ...

    def method2(self):
        return self.method1() + 10

According to the book, if you want to test method2, you should isolate
it from method1 and from the class instance ('self').
Other books are not so strict...
What should I follow as a newbie?
Currently, I don't create mocks of units if they are within the same
class as the unit under test. If that is not the right approach,
please explain what the best practices are... I am just learning TDD.
You are still being too literal. The "1-3 lines of code" guideline is
a guideline, not a rule. It means 1 small edit is best, 2 edits are
mostly harmless, 3 are okay, 4 are acceptable, and so on. It's the peak
of the Zipf's Law curve.
You "mocked the wire" with that hardcoded XML so that your subsequent
edits can be very short and reliable. Props!
I believe that Ben is perfectly correct, and that you are talking at
cross purposes because you've missed the significance of a point (as
you later realise) within his post.
Runtime test isolation doesn't enter into it, from what I can see.
Can you please clarify the situation one way or the other?
You used “propagate” in a sense I don't understand there.
I'm not sure what the author means, but I would say that, as it stands,
the advice is independent of what testing is being done. In all cases:

* Make your code units small, so each one is not doing much and is easy
  to test.

* Make the interface of units at each level as narrow as feasible, so
  they're not brittle in the face of changes to the implementation.
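As a made-up illustration of that second point, a function that takes
only the values it needs is narrower, and easier to test, than one that
takes a whole configuration object:

class Config:
    def __init__(self, host, port):
        self.host = host
        self.port = port

# Wide interface: coupled to the Config type and its attribute layout,
# so every test must assemble a Config first.
def address_from_config(config):
    return "%s:%s" % (config.host, config.port)

# Narrow interface: takes only what it needs; tests need no fixture.
def address(host, port):
    return "%s:%s" % (host, port)

assert address("example.com", 80) == address_from_config(Config("example.com", 80))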
I don't really know what that means.
Remember that each test case should not be “test method1”. That is far
too broad, and in some cases too narrow. There is no one-to-one mapping
between methods and unit test cases.
Instead, each test case should test one true-or-false assertion about
the behaviour of the code. “When we start with this initial state (the
test fixture), and perform this operation, the resulting state is that”.
It makes a lot of sense to name the test case so the assertion being
made *is* its name: not ‘test frobnicate’ with dozens of assertions, but
one ‘test_frobnicate_with_valid_spangulator_returns_true’ which makes
that assertion, and extra ones for each distinct assertion.
The failure of a unit test case should indicate *exactly* what has gone
wrong. If you want to make multiple assertions about a code unit, write
multiple test cases for that unit and name the tests accordingly.
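A sketch of that naming style (the spangulator code is invented here
purely to give the tests something to exercise):

import unittest

# Invented code under test, for illustration only.
class Spangulator:
    def __init__(self, valid=True):
        self.valid = valid

def frobnicate(spangulator):
    if not spangulator.valid:
        raise ValueError("invalid spangulator")
    return True

# One true-or-false assertion per test case, named after the assertion,
# so a failure report says exactly what went wrong.
class TestFrobnicate(unittest.TestCase):
    def test_frobnicate_with_valid_spangulator_returns_true(self):
        self.assertTrue(frobnicate(Spangulator(valid=True)))

    def test_frobnicate_with_invalid_spangulator_raises_valueerror(self):
        with self.assertRaises(ValueError):
            frobnicate(Spangulator(valid=False))

if __name__ == "__main__":
    unittest.main()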
This incidentally requires that you test something small enough that
such a true-or-false assertion is meaningful, which leads to
well-designed code with small easily-tested code units. But that's an
emergent property, not a natural law.
In the fixture of the unit test case, create whatever test doubles are
necessary to put your code into the initial state you need for the test
case; then tear all those down whatever the result of the test case.
If you need to create great honking wads of fixtures for any test case,
that is a code smell: your code units are too tightly coupled to
persistent state, and need to be decoupled with narrow interfaces.
The Python ‘unittest’ module makes this easier by letting you define
fixtures common to many test cases (the ‘setUp’ and ‘tearDown’
interface). My rule of thumb is: if I need to make different fixtures
for some set of test cases, I write a new test case class for those cases.
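A sketch of that rule of thumb, reusing the Frob and Knob names from
earlier in the thread (their bodies are assumed):

import unittest

# Assumed minimal stand-ins for the classes discussed earlier.
class Knob:
    def __init__(self, value=0):
        self._value = value

    def value(self):
        return self._value

class Frob:
    def __init__(self, knob=None):
        self.knob = knob if knob is not None else Knob()

# One TestCase class per distinct fixture.
class TestFrobWithDefaultKnob(unittest.TestCase):
    def setUp(self):
        self.frob = Frob()    # fixture common to these test cases

    def tearDown(self):
        del self.frob         # scrub the fixture whatever the result

    def test_default_knob_reads_zero(self):
        self.assertEqual(0, self.frob.knob.value())

class TestFrobWithInjectedKnob(unittest.TestCase):
    def setUp(self):
        self.frob = Frob(Knob(42))

    def tearDown(self):
        del self.frob

    def test_injected_knob_value_is_preserved(self):
        self.assertEqual(42, self.frob.knob.value())

if __name__ == "__main__":
    unittest.main()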
 \       “Following fashion and the status quo is easy. Thinking about |
  `\     your users' lives and creating something practical is much |
_o__)    harder.” —Ryan Singer, 2008-07-09 |
Sorry for the late reply!
Thank you very much for sharing your experience! I still have a lot to
grasp in TDD.
Pretty much any test assumes that basic things other than the tested
object work correctly. For instance, any test of method2 will assume
that '+' works correctly. The dependency graph between the methods of a
class will nearly always be acyclic, so I would start with the 'leaf'
methods and work up. In the above case, test method1 first and then
method2. The dependence of the test of method2 on the correctness of
method1 is hardly worse, to me, than its dependence on the correctness
of int.__add__. It is just that the responsibility for the latter falls
on the developers, and on *their* suite of tests.
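A sketch of that leaf-first order, using the class A from the earlier
post (method1's body is assumed here):

import unittest

class A(object):
    def method1(self):
        return 5    # assumed body, for the sake of a runnable example

    def method2(self):
        return self.method1() + 10

class TestA(unittest.TestCase):
    # Test the 'leaf' method first...
    def test_method1_returns_its_value(self):
        self.assertEqual(5, A().method1())

    # ...then method2, trusting method1 just as we trust int.__add__.
    def test_method2_adds_10_to_method1(self):
        self.assertEqual(15, A().method2())

if __name__ == "__main__":
    unittest.main()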
Whenever any code test fails, there are two possibilities: the code
itself is buggy, or something it depends on is buggy. I see two reasons
for isolation and mock units: saving test resources (especially time)
and independent development. If you are developing ClassA and someone
else is developing ClassB, you might want to test ClassA even though it
depends on ClassB and ClassB is not ready yet. This consideration is
much less likely to apply to method2 versus method1 of a coherent class.
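A sketch of that independent-development case (ClassA, ClassB, and the
method names are invented; unittest.mock stands in for whichever mock
library you choose):

import unittest
from unittest.mock import Mock

# ClassA is under development; it depends on a ClassB collaborator
# that someone else has not finished writing yet.
class ClassA:
    def __init__(self, b):
        self.b = b

    def double_reading(self):
        return 2 * self.b.reading()

class TestClassA(unittest.TestCase):
    def test_double_reading_doubles_the_collaborator_value(self):
        fake_b = Mock()                   # stands in for the unfinished ClassB
        fake_b.reading.return_value = 21
        self.assertEqual(42, ClassA(fake_b).double_reading())

if __name__ == "__main__":
    unittest.main()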
My current opinions.
Terry Jan Reedy