Fatal error: RC3 cache issue (?) and resource hog

Started by ian-dp, March 17, 2010, 05:16:26 AM


ian-dp

Hello, I really enjoy SMF, thanks for all the hard work.

I recently moved an SMF forum from cheap shared PHP hosting to Laughing Squid's cloud hosting. I never had performance issues, even on the cheap shared server without caching, but I'm trying to consolidate everything on one slashdot-proof host.

Suddenly I have to watch my 'compute cycles'. A compute cycle is a loosely defined unit of computer resources that seems to be calculated with black magic. There's no per-process or per-file breakdown of resource use, either for me or, evidently, for the ISP. Nonetheless, I am now a slave to these mysterious units, and they're ballooning like crazy. My plan includes 1000 of these magic-bean compute cycles per month, and I've used about 255 in the first five days of hosting the SMF forum. The forum isn't crazy busy: at most 10-12 posts per day.

I contacted the ISP to find out whether this is normal, or if there's anything they could tell me. They looked in my error log and guessed it was SMF, due to this error:

[16-Mar-2010 17:11:20] PHP Warning:  require(/mnt/stor1-wc1-dfw1/380423/475831/www.dangerousprototypes.com/web/content/forum/cache/data_40f49c0d885572397537081ffd2fec28-SMF-modSettings.php) [<a href='function.require'>function.require</a>]: failed to open stream: No such file or directory in /mnt/stor1-wc1-dfw1/380423/475831/www.dangerousprototypes.com/web/content/forum/Sources/Load.php on line 2623
[16-Mar-2010 17:11:20] PHP Fatal error:  require() [<a href='function.require'>function.require</a>]: Failed opening required '/mnt/stor1-wc1-dfw1/380423/475831/www.dangerousprototypes.com/web/content/forum/cache/data_40f49c0d885572397537081ffd2fec28-SMF-modSettings.php' (include_path='.:/usr/share/pear:/usr/share/php') in /mnt/stor1-wc1-dfw1/380423/475831/www.dangerousprototypes.com/web/content/forum/Sources/Load.php on line 2623


There are 72 errors, but each incident logs two entries (a warning followed by a fatal error), so there are 36 incidents total. This is similar to the error reported here. My error also involves -SMF-modSettings.php in the cache folder.

I'm not convinced this is actually the resource hog. Most of the error incidents were in the first few days after the migration, with only a few (3-4 per day) in the last few days, but the compute cycles continue to climb.

// Otherwise it's SMF data!
elseif (file_exists($cachedir . '/data_' . $key . '.php') && filesize($cachedir . '/data_' . $key . '.php') > 10)
{
	require($cachedir . '/data_' . $key . '.php');


I opened up Load.php and found the offending lines, quoted above. It's the disk cache: the code first checks that the cache file exists, then requires it. Maybe another process has the file locked, or deletes it, in the window between the check and the require (a classic time-of-check-to-time-of-use race).
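For what it's worth, a race-tolerant read might look something like the sketch below. This is not SMF's actual code, just an illustration of the idea: instead of trusting the file_exists() check, use @include (which returns false on failure rather than dying) and treat a vanished file as a cache miss. $cachedir and $key are the same variables used in Load.php.

```php
<?php
// Sketch only, not SMF's code: tolerate the cache file disappearing
// between the existence check and the read.
$file = $cachedir . '/data_' . $key . '.php';
$value = null;

if (file_exists($file) && filesize($file) > 10)
{
	// @include suppresses the warning and returns false if the file
	// was deleted after the check above, instead of a fatal error
	// like require() gives.
	if ((@include $file) === false)
		$value = null; // Treat it as a cache miss and regenerate.
}
```

With require(), a lost race is fatal and kills the request; with a checked @include, it just degrades to a cache miss.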

I have SMF configured to use the cache; it detects memcache on the ISP. I've disabled it to see if the compute cycles stop growing as fast and the errors go away.

Does anyone have any advice on the errors, caching, and resource use? I've attached my complete PHP error log; it contains only this particular error, and everything else seems to be okay.

Thanks in advance.

Info:
url: hxxp:dangerousprototypes.com/forum [nonactive]
SMF 2.0 RC3 (upgraded, step by step, from 1.something)
Caching was enabled (memcache), now disabled
Laughing Squid cloud hosting (rackspace cloud)
No mods

FireSlash

#1
Just popping in to confirm part of this one.

I'm running a vanilla RC3 install. During heavy traffic, I'll get the occasional parse error on a disk-cached file, consistent with an incomplete file being read (unexpected ASCII input errors, the same ones I get when I try to parse a file that's still being uploaded).

I was running cache level 1 (default setting) without any compatible accelerators. I've since disabled it.
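Half-written cache files like that can be avoided if the writer never exposes a partial file: write to a temp file in the same directory, then rename() it into place, since a rename within one filesystem is atomic. A sketch of that pattern (a hypothetical helper, not SMF's actual cache writer):

```php
<?php
// Sketch only: write a cache entry atomically so a concurrent reader
// never sees a half-written file. Hypothetical helper, not SMF code.
function put_cache_atomic($cachedir, $key, $php_source)
{
	$final = $cachedir . '/data_' . $key . '.php';

	// Create a unique temp file in the SAME directory, so the
	// rename below stays on one filesystem.
	$tmp = tempnam($cachedir, 'tmp_');
	file_put_contents($tmp, $php_source);

	// rename() on the same filesystem is atomic: readers see either
	// the complete old file or the complete new one, never a mix.
	rename($tmp, $final);
}
```

A reader can still lose the existence-check race if entries are deleted, but it can no longer parse a truncated file.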

Kays

Hi, welcome to SMF both of you. :)

One of the things I've seen cause this error is if you run out of disk space on your hosting plan.

If at first you don't succeed, use a bigger hammer. If that fails, read the manual.
My Mods

FireSlash

That was the first thing I checked.

$ df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/xvda             16000031  11453377   4546654  72% /
tmpfs                   184428         0    184428   0% /lib/init/rw
tmpfs                   184428         0    184428   0% /dev/shm


