Bug#820416: kodi: FTBFS in testing (Segmentation fault)

Balint Reczey balint at balintreczey.hu
Sun Apr 17 16:04:15 UTC 2016


Control: severity -1 normal
Control: tags -1 upstream confirmed

Hi Santiago,

2016-04-16 18:43 GMT+02:00 Santiago Vila <sanvila at unex.es>:
> I've put the program kodi-test here:
>
> https://people.debian.org/~sanvila/bug-820416/kodi-test.gz
>
> It was built on a stretch chroot today (for amd64).
>
> To reproduce the segfault, just install the build-dependencies for
> kodi, i.e. on a stretch chroot, please do:
>
> apt-get build-dep kodi
>
> then try to execute kodi-test:
>
> ./kodi-test
>
> This is all you need to reproduce the problem.

Thank you for the bug report and the additional information.

I have just created a chroot to reproduce the issue, and even after setting
memory limits your kodi-test binary did not crash there.
A kodi-test binary I built myself did not crash either.

root at stretch:/kodi-16.1~rc2+dfsg1# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 30616
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) 2048000
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 30616
virtual memory          (kbytes, -v) 4096000
file locks                      (-x) unlimited

./kodi-test-dloaded
...
[----------] Global test environment tear-down
[==========] 574 tests from 89 test cases ran. (9800 ms total)
[  PASSED  ] 574 tests.
Clean shutdown of TestGlobalPattern1

Valgrind found a few issues in Google Test, but nothing extraordinary:
...
==21850== Use of uninitialised value of size 8
==21850==    at 0xFAF6201: _itoa_word (in /lib/x86_64-linux-gnu/libc-2.22.so)
==21850==    by 0xFAF998C: vfprintf (in /lib/x86_64-linux-gnu/libc-2.22.so)
==21850==    by 0xFB23258: vsnprintf (in /lib/x86_64-linux-gnu/libc-2.22.so)
==21850==    by 0xFB00D11: snprintf (in /lib/x86_64-linux-gnu/libc-2.22.so)
==21850==    by 0x130A48A: testing::(anonymous namespace)::PrintByteSegmentInObjectTo(unsigned char const*, unsigned long, unsigned long, std::ostream*) [clone .constprop.392] (gtest-printers.cc:72)
==21850==    by 0x130E6EC: PrintBytesInObjectToImpl (gtest-printers.cc:90)
==21850==    by 0x130E6EC: testing::internal2::PrintBytesInObjectTo(unsigned char const*, unsigned long, std::ostream*) (gtest-printers.cc:112)
==21850==    by 0x134E4A2: PrintValue (gtest-printers.h:136)
==21850==    by 0x134E4A2: operator<< <char, std::char_traits<char>, SplitPath> (gtest-printers.h:201)
==21850==    by 0x134E4A2: DefaultPrintNonContainerTo<SplitPath> (gtest-printers.h:245)
==21850==    by 0x134E4A2: DefaultPrintTo<SplitPath> (gtest-printers.h:338)
==21850==    by 0x134E4A2: PrintTo<SplitPath> (gtest-printers.h:376)
==21850==    by 0x134E4A2: Print (gtest-printers.h:600)
==21850==    by 0x134E4A2: UniversalPrint<SplitPath> (gtest-printers.h:756)
==21850==    by 0x134E4A2: Print (gtest-printers.h:684)
==21850==    by 0x134E4A2: PrintToString<SplitPath> (gtest-printers.h:849)
==21850==    by 0x134E4A2: testing::internal::ParameterizedTestCaseInfo<TestNfs>::RegisterTests() (gtest-param-util.h:508)
==21850==    by 0x130BC44: RegisterTests (gtest-param-util.h:602)
==21850==    by 0x130BC44: testing::internal::UnitTestImpl::RegisterParameterizedTests() (gtest.cc:2290)
==21850==    by 0x1322ED8: testing::internal::UnitTestImpl::PostFlagParsingInit() (gtest.cc:4125)
==21850==    by 0x132C034: void testing::internal::InitGoogleTestImpl<char>(int*, char**) (gtest.cc:4991)
==21850==    by 0x8461B6: main (xbmc-test.cpp:40)
...

OTOH, in a VirtualBox VM limited to 2 GB of RAM I was able to reproduce the
crash and got a meaningful backtrace:
...
#0  __GI___pthread_mutex_lock (mutex=mutex at entry=0x140) at ../nptl/pthread_mutex_lock.c:68
        __PRETTY_FUNCTION__ = "__pthread_mutex_lock"
        type = <optimized out>
        id = <optimized out>
#1  0x00000000014b08b4 in XbmcThreads::pthreads::RecursiveMutex::lock (this=0x140) at /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/platform/pthreads/CriticalSection.h:49
No locals.
#2  XbmcThreads::CountingLockable<XbmcThreads::pthreads::RecursiveMutex>::lock (this=0x140) at /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/Lockables.h:60
No locals.
#3  XbmcThreads::UniqueLock<CCriticalSection>::UniqueLock (lockable=..., this=<synthetic pointer>) at /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/Lockables.h:127
No locals.
#4  CSingleLock::CSingleLock (cs=..., this=<synthetic pointer>) at /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/SingleLock.h:38
No locals.
#5  XbmcThreads::CEventGroup::Set (child=0x7ffc9df57090, this=0xe0) at Event.h:122
No locals.
#6  CEvent::Set (this=this at entry=0x7ffc9df57090) at Event.cpp:80
        iter =
        l = {<XbmcThreads::UniqueLock<CCriticalSection>> = {<XbmcThreads::NonCopyable> = {<No data fields>}, mutex = @0x7ffc9df57098, owns = true}, <No data fields>}
#7  0x00000000014b2d09 in CThread::staticThread (data=0x7ffc9df56fc0) at Thread.cpp:137
        pThread = 0x7ffc9df56fc0
        name = "AlarmClock"
        id = 140436486321920
        autodelete = false
        __FUNCTION__ = "staticThread"
        lock = {<XbmcThreads::UniqueLock<CCriticalSection>> = {<XbmcThreads::NonCopyable> = {<No data fields>}, mutex = @0x7ffc9df571f0, owns = true}, <No data fields>}
#8  0x00007fba1209f454 in start_thread (arg=0x7fb9eae17700) at pthread_create.c:334
        __res = <optimized out>
        pd = 0x7fb9eae17700
        now = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140436486321920, -4855880568288393046, 0, 140722958592575, 3, 140722958592704, 4895315604668598442, 4893255871758911658},
              mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
        pagesize_m1 = <optimized out>
        sp = <optimized out>
        freesize = <optimized out>
        __PRETTY_FUNCTION__ = "start_thread"
#9  0x00007fba09568edd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:116
No locals.
#10 0x0000000000000000 in ?? ()
No symbol table info available.
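
A side note on the backtrace: the faulting addresses are suspiciously small
(this=0xe0 in CEventGroup::Set, 0x140 for the mutex), which usually means a
member was reached through a null or dangling object pointer, so the faulting
address is just the member's offset added to a bogus base. Here is a minimal
sketch of that effect (illustration only; FakeEventGroup, FakeMutex and the
0x60 offset are made up and not Kodi's real layout):

#include <cstddef>
#include <cstdio>

struct FakeMutex { unsigned char state[40]; };   // arbitrary stand-in

struct FakeEventGroup {                          // hypothetical layout
    unsigned char other_members[0x60];           // padding chosen so that...
    FakeMutex mutex;                             // ...the mutex sits at offset 0x60
};

int main() {
    // If the CEventGroup pointer itself were the bogus value 0xe0, its mutex
    // member would end up at 0xe0 + offset -- a tiny, unmapped address that
    // pthread_mutex_lock then faults on, as in frame #0 above.
    std::printf("mutex offset inside FakeEventGroup: %#zx\n",
                offsetof(FakeEventGroup, mutex));
    std::printf("bogus base 0xe0 + offset = %#zx\n",
                (std::size_t)0xe0 + offsetof(FakeEventGroup, mutex));
    return 0;
}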



The backtrace suggests there is something wrong with the mutex handling, and TSAN seems to confirm that:
...
[ RUN      ] TestThreadLocal.Stack
==================
WARNING: ThreadSanitizer: data race (pid=32291)
  Atomic read of size 1 at 0x7d580000d5d8 by thread T92 (mutexes: write M1256501375458662192):
    #0 pthread_mutex_lock <null> (libtsan.so.0+0x000000034b76)
    #1 XbmcThreads::pthreads::RecursiveMutex::lock() /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/platform/pthreads/CriticalSection.h:49 (kodi-test+0x0000014b087e)
    #2 XbmcThreads::CountingLockable<XbmcThreads::pthreads::RecursiveMutex>::lock() /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/Lockables.h:60 (kodi-test+0x0000014b087e)
    #3 XbmcThreads::UniqueLock<CCriticalSection>::UniqueLock(CCriticalSection&) /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/Lockables.h:127 (kodi-test+0x0000014b087e)
    #4 CSingleLock::CSingleLock(CCriticalSection&) /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/SingleLock.h:38 (kodi-test+0x0000014b087e)
    #5 CEvent::Set() /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/Event.cpp:75 (kodi-test+0x0000014b087e)

  Previous write of size 8 at 0x7d580000d5d8 by main thread:
    #0 operator new(unsigned long) <null> (libtsan.so.0+0x000000025723)
    #1 thread::thread(IRunnable&) /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/test/TestHelpers.h:75 (kodi-test+0x0000014958be)
    #2 TestThreadLocal_Stack_Test::TestBody() /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/test/TestThreadLocal.cpp:113 (kodi-test+0x0000014958be)

  Location is heap block of size 672 at 0x7d580000d500 allocated by main thread:
    #0 operator new(unsigned long) <null> (libtsan.so.0+0x000000025723)
    #1 thread::thread(IRunnable&) /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/test/TestHelpers.h:75 (kodi-test+0x0000014958be)
    #2 TestThreadLocal_Stack_Test::TestBody() /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/test/TestThreadLocal.cpp:113 (kodi-test+0x0000014958be)

  Mutex M1256501375458662192 is already destroyed.

  Thread T92 'DumbThread' (tid=32657, running) created by main thread at:
    #0 pthread_create <null> (libtsan.so.0+0x000000026e64)
    #1 CThread::SpawnThread(unsigned int) /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/platform/pthreads/ThreadImpl.cpp:50 (kodi-test+0x0000014b1494)

SUMMARY: ThreadSanitizer: data race ??:0 __interceptor_pthread_mutex_lock
==================
[       OK ] TestThreadLocal.Stack (11 ms)
[ RUN      ] TestThreadLocal.Heap
==================
WARNING: ThreadSanitizer: data race (pid=32291)
  Atomic read of size 1 at 0x7d580000d730 by thread T92 (mutexes: write M1256501375458662192):
    #0 pthread_mutex_unlock <null> (libtsan.so.0+0x000000034c7b)
    #1 XbmcThreads::pthreads::RecursiveMutex::unlock() /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/platform/pthreads/CriticalSection.h:51 (kodi-test+0x0000014b2d1f)
    #2 XbmcThreads::CountingLockable<XbmcThreads::pthreads::RecursiveMutex>::unlock() /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/Lockables.h:62 (kodi-test+0x0000014b2d1f)
    #3 XbmcThreads::UniqueLock<CCriticalSection>::unlock() /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/Lockables.h:138 (kodi-test+0x0000014b2d1f)
    #4 CSingleLock::Leave() /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/SingleLock.h:41 (kodi-test+0x0000014b2d1f)
    #5 CThread::staticThread(void*) /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/Thread.cpp:140 (kodi-test+0x0000014b2d1f)

  Previous write of size 8 at 0x7d580000d730 by main thread:
    #0 operator new(unsigned long) <null> (libtsan.so.0+0x000000025723)
    #1 thread::thread(IRunnable&) /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/test/TestHelpers.h:75 (kodi-test+0x00000149648d)
    #2 TestThreadLocal_Heap_Test::TestBody() /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/test/TestThreadLocal.cpp:131 (kodi-test+0x00000149648d)

  Location is heap block of size 672 at 0x7d580000d500 allocated by main thread:
    #0 operator new(unsigned long) <null> (libtsan.so.0+0x000000025723)
    #1 thread::thread(IRunnable&) /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/test/TestHelpers.h:75 (kodi-test+0x00000149648d)
    #2 TestThreadLocal_Heap_Test::TestBody() /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/test/TestThreadLocal.cpp:131 (kodi-test+0x00000149648d)

  Mutex M1256501375458662192 is already destroyed.

  Thread T92 'DumbThread' (tid=32657, running) created by main thread at:
    #0 pthread_create <null> (libtsan.so.0+0x000000026e64)
    #1 CThread::SpawnThread(unsigned int) /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/platform/pthreads/ThreadImpl.cpp:50 (kodi-test+0x0000014b1494)

SUMMARY: ThreadSanitizer: data race ??:0 __interceptor_pthread_mutex_unlock
==================
==================
WARNING: ThreadSanitizer: unlock of an unlocked mutex (or by a wrong thread) (pid=32291)
    #0 pthread_mutex_unlock <null> (libtsan.so.0+0x000000034c7b)
    #1 XbmcThreads::pthreads::RecursiveMutex::unlock() /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/platform/pthreads/CriticalSection.h:51 (kodi-test+0x0000014b2d1f)
    #2 XbmcThreads::CountingLockable<XbmcThreads::pthreads::RecursiveMutex>::unlock() /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/Lockables.h:62 (kodi-test+0x0000014b2d1f)
    #3 XbmcThreads::UniqueLock<CCriticalSection>::unlock() /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/Lockables.h:138 (kodi-test+0x0000014b2d1f)
    #4 CSingleLock::Leave() /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/SingleLock.h:41 (kodi-test+0x0000014b2d1f)
    #5 CThread::staticThread(void*) /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/Thread.cpp:140 (kodi-test+0x0000014b2d1f)

  Location is heap block of size 672 at 0x7d580000d500 allocated by main thread:
    #0 operator new(unsigned long) <null> (libtsan.so.0+0x000000025723)
    #1 thread::thread(IRunnable&) /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/test/TestHelpers.h:75 (kodi-test+0x00000149648d)
    #2 TestThreadLocal_Heap_Test::TestBody() /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/test/TestThreadLocal.cpp:131 (kodi-test+0x00000149648d)

  Mutex M8941 (0x7d580000d730) created at:
    #0 pthread_mutex_init <null> (libtsan.so.0+0x000000027865)
    #1 XbmcThreads::pthreads::RecursiveMutex::RecursiveMutex() /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/platform/pthreads/CriticalSection.h:45 (kodi-test+0x0000014b24f0)
    #2 XbmcThreads::CountingLockable<XbmcThreads::pthreads::RecursiveMutex>::CountingLockable() /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/Lockables.h:57 (kodi-test+0x0000014b24f0)
    #3 CCriticalSection::CCriticalSection() /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/platform/pthreads/CriticalSection.h:66 (kodi-test+0x0000014b24f0)
    #4 CThread::CThread(IRunnable*, char const*) /home/vagrant/kodi-16.1~rc2+dfsg1/xbmc/threads/Thread.cpp:63 (kodi-test+0x0000014b24f0)

SUMMARY: ThreadSanitizer: unlock of an unlocked mutex (or by a wrong thread) ??:0 __interceptor_pthread_mutex_unlock
==================
[       OK ] TestThreadLocal.Heap (5 ms)
...
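
FWIW, the "Mutex ... is already destroyed" reports read like an object
lifetime problem: the main thread tears down the thread wrapper (and with it
the mutex the worker still locks/unlocks in its shutdown path,
CThread::staticThread -> CEvent::Set) while the spawned thread is still
running. Here is a minimal, self-contained sketch of that pattern using
standard C++ types instead of the Kodi classes (Worker and signal_done are
made-up names, and whether Kodi's test helper really destroys the wrapper
this early is exactly what still needs verifying):

#include <mutex>
#include <thread>

// Worker stands in for the thread wrapper; its mutex plays the role of the
// CCriticalSection that CEvent::Set locks via CSingleLock.
struct Worker {
    std::recursive_mutex m;
    void signal_done() {
        std::lock_guard<std::recursive_mutex> lock(m);  // like CSingleLock
        // ... mark completion, wake waiters ...
    }
};

int main() {
    auto* w = new Worker;                        // allocated by the spawning thread
    std::thread t([w] { w->signal_done(); });    // worker signals through w's mutex
    delete w;   // destroyed while the worker may not have locked w->m yet;
                // locking a destroyed mutex is undefined behaviour, which
                // matches the "already destroyed" reports above.
    t.join();
    return 0;
}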

I still need to test whether the problem exists in upstream master and to
verify that the TSAN warning is not a false positive and that the root cause
is indeed here.

I'm setting the bug's severity to normal because, while the FTBFS is
reproducible, it does not happen on Debian's buildds and thus does not
prevent rebuilding the package when needed. The bug also seems to have no
effect on the software in normal use.

Cheers,
Balint


