Bug#597133: gvfsd-dav still crashes on attempt to mount shared folder
Alexey Slepov
sir-lexa at yandex.ru
Sun May 26 20:51:05 UTC 2013
Package: gvfs
Version: 1.12.3-4
Followup-For: Bug #597133
Hello!
This is still an issue in stable Debian 7.0.
I have three computers at home running Debian 7.0 and want to share their "Public"
folders.
Nautilus shows the error "DBus error org.freedesktop.DBus.Error.NoReply: Message did not receive a reply (timeout by message bus)".
dmesg shows lines like:
[ 6968.600842] gvfsd-dav[6554]: segfault at 0 ip 00007fdfbbdd73e6 sp 00007fff4227d588 error 4 in libc-2.13.so[7fdfbbd58000+180000]
It does not happen every time: roughly one attempt in ten succeeds and the folder is
mounted.
Strangely, when running under valgrind roughly every second attempt succeeds.
Wrapping the gvfsd-dav executable with valgrind like this:
user@debian-1:~$ cat /usr/lib/gvfs/gvfsd-dav
#! /bin/bash
LANG=C valgrind /usr/lib/gvfs/gvfsd-dav.original $* > /home/user/gvfsd-dav.log 2>&1
shows this:
==7080== Memcheck, a memory error detector
==7080== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al.
==7080== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info
==7080== Command: /usr/lib/gvfs/gvfsd-dav.original --spawner :1.9 /org/gtk/gvfs/exec_spaw/35
==7080==
==7080== Invalid read of size 1
==7080==    at 0x4C2A001: strcmp (mc_replace_strmem.c:711)
==7080==    by 0x8DDA003: avahi_service_resolver_event (in /usr/lib/x86_64-linux-gnu/libavahi-client.so.3.2.9)
==7080==    by 0x8DD5D33: ??? (in /usr/lib/x86_64-linux-gnu/libavahi-client.so.3.2.9)
==7080==    by 0x527D53D: dbus_connection_dispatch (in /lib/x86_64-linux-gnu/libdbus-1.so.3.7.2)
==7080==    by 0x8DDC535: ??? (in /usr/lib/x86_64-linux-gnu/libavahi-client.so.3.2.9)
==7080==    by 0x89C065F: ??? (in /usr/lib/x86_64-linux-gnu/libavahi-glib.so.1.0.2)
==7080==    by 0x5F12354: g_main_context_dispatch (in /lib/x86_64-linux-gnu/libglib-2.0.so.0.3200.4)
==7080==    by 0x5F12687: ??? (in /lib/x86_64-linux-gnu/libglib-2.0.so.0.3200.4)
==7080==    by 0x5F12A81: g_main_loop_run (in /lib/x86_64-linux-gnu/libglib-2.0.so.0.3200.4)
==7080==    by 0x40EA2A: daemon_main (daemon-main.c:300)
==7080==    by 0x40718F: main (daemon-main-generic.c:39)
==7080==  Address 0x0 is not stack'd, malloc'd or (recently) free'd
==7080==
==7080==
==7080== Process terminating with default action of signal 11 (SIGSEGV)
==7080==  Access not within mapped region at address 0x0
==7080==    at 0x4C2A001: strcmp (mc_replace_strmem.c:711)
==7080==    by 0x8DDA003: avahi_service_resolver_event (in /usr/lib/x86_64-linux-gnu/libavahi-client.so.3.2.9)
==7080==    by 0x8DD5D33: ??? (in /usr/lib/x86_64-linux-gnu/libavahi-client.so.3.2.9)
==7080==    by 0x527D53D: dbus_connection_dispatch (in /lib/x86_64-linux-gnu/libdbus-1.so.3.7.2)
==7080==    by 0x8DDC535: ??? (in /usr/lib/x86_64-linux-gnu/libavahi-client.so.3.2.9)
==7080==    by 0x89C065F: ??? (in /usr/lib/x86_64-linux-gnu/libavahi-glib.so.1.0.2)
==7080==    by 0x5F12354: g_main_context_dispatch (in /lib/x86_64-linux-gnu/libglib-2.0.so.0.3200.4)
==7080==    by 0x5F12687: ??? (in /lib/x86_64-linux-gnu/libglib-2.0.so.0.3200.4)
==7080==    by 0x5F12A81: g_main_loop_run (in /lib/x86_64-linux-gnu/libglib-2.0.so.0.3200.4)
==7080==    by 0x40EA2A: daemon_main (daemon-main.c:300)
==7080==    by 0x40718F: main (daemon-main-generic.c:39)
==7080==  If you believe this happened as a result of a stack
==7080==  overflow in your program's main thread (unlikely but
==7080==  possible), you can try to increase the size of the
==7080==  main thread stack using the --main-stacksize= flag.
==7080==  The main thread stack size used in this run was 8388608.
==7080==
==7080== HEAP SUMMARY:
==7080== in use at exit: 259,651 bytes in 2,095 blocks
==7080== total heap usage: 3,516 allocs, 1,421 frees, 479,718 bytes allocated
==7080==
==7080== LEAK SUMMARY:
==7080== definitely lost: 0 bytes in 0 blocks
==7080== indirectly lost: 0 bytes in 0 blocks
==7080== possibly lost: 56,133 bytes in 359 blocks
==7080== still reachable: 203,518 bytes in 1,736 blocks
==7080== suppressed: 0 bytes in 0 blocks
==7080== Rerun with --leak-check=full to see details of leaked memory
==7080==
==7080== For counts of detected and suppressed errors, rerun with: -v
==7080== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 8 from 6)
and sometimes this:
==6958== Memcheck, a memory error detector
==6958== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al.
==6958== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info
==6958== Command: /usr/lib/gvfs/gvfsd-dav.original --spawner :1.9 /org/gtk/gvfs/exec_spaw/30
==6958==
gvfsd-dav.original: ../avahi-common/dbus-watch-glue.c:91: request_dispatch: Assertion `dbus_connection_get_dispatch_status(d->connection) == DBUS_DISPATCH_DATA_REMAINS' failed.
==6958==
==6958== HEAP SUMMARY:
==6958== in use at exit: 259,382 bytes in 2,092 blocks
==6958== total heap usage: 3,449 allocs, 1,357 frees, 477,948 bytes allocated
==6958==
==6958== LEAK SUMMARY:
==6958== definitely lost: 0 bytes in 0 blocks
==6958== indirectly lost: 0 bytes in 0 blocks
==6958== possibly lost: 56,784 bytes in 361 blocks
==6958== still reachable: 202,598 bytes in 1,731 blocks
==6958== suppressed: 0 bytes in 0 blocks
==6958== Rerun with --leak-check=full to see details of leaked memory
==6958==
==6958== For counts of detected and suppressed errors, rerun with: -v
==6958== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 8 from 6)
So the behaviour is different on every run.
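I am not familiar with the avahi/gvfs code, but the invalid read at address 0x0 in strcmp(), called from avahi_service_resolver_event(), looks like a NULL string reaching strcmp() while a resolver result is being processed. Below is a minimal sketch of that failure pattern; it is purely illustrative C, and the struct, field and function names are made up, not the actual libavahi-client or gvfsd-dav source:

#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for a resolver result delivered over D-Bus;
 * any of these strings may be missing (NULL). */
struct resolver_result {
    const char *name;
    const char *type;
    const char *domain;
};

/* Without the NULL check, strcmp(NULL, ...) reads from address 0 and
 * produces exactly the "Invalid read of size 1 ... Address 0x0"
 * pattern valgrind shows above. */
static int result_matches(const struct resolver_result *r, const char *wanted)
{
    if (r->name == NULL)  /* defensive guard: treat a missing name as no match */
        return 0;
    return strcmp(r->name, wanted) == 0;
}

int main(void)
{
    struct resolver_result ok  = { "Public", "_webdav._tcp", "local" };
    struct resolver_result bad = { NULL, "_webdav._tcp", "local" };

    printf("ok:  %d\n", result_matches(&ok,  "Public"));  /* prints 1 */
    printf("bad: %d\n", result_matches(&bad, "Public"));  /* prints 0, no crash */
    return 0;
}

Of course I cannot tell from the trace where the NULL actually comes from inside libavahi-client or gvfsd-dav; the sketch is only meant to show why the faulting address is 0.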
-- System Information:
Debian Release: 7.0
APT prefers stable-updates
APT policy: (500, 'stable-updates'), (500, 'stable')
Architecture: amd64 (x86_64)
Foreign Architectures: i386
Kernel: Linux 3.2.0-4-amd64 (SMP w/2 CPU cores)
Locale: LANG=ru_RU.utf8, LC_CTYPE=ru_RU.utf8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
Versions of packages gvfs depends on:
ii gvfs-common 1.12.3-4
ii gvfs-daemons 1.12.3-4
ii gvfs-libs 1.12.3-4
ii libc6 2.13-38
ii libdbus-1-3 1.6.8-1
ii libglib2.0-0 2.33.12+really2.32.4-5
ii libudev0 175-7.2
gvfs recommends no packages.
Versions of packages gvfs suggests:
ii gvfs-backends 1.12.3-4
-- no debconf information