Hi - FB Sprint


Message boards : Number crunching : Hi - FB Sprint

Vato
Joined: 28 May 10
Posts: 3
Credit: 101,886
RAC: 159
Message 475 - Posted: 6 Jun 2020, 12:37:54 UTC

You are very generous, donating someone else's time and effort.

davidBAM
Joined: 3 Jun 19
Posts: 8
Credit: 3,376,390
RAC: 2,147
Message 476 - Posted: 6 Jun 2020, 13:08:25 UTC - in response to Message 475.

1. The wanless project has a history of losing work even running open outside of challenges. Why? Bearnol's 'trusty server' isn't trusty at all & has been known to have downtime in excess of his own 48hr deadline.

2. Bearnol got no advance notice of being selected for the Sprint. Very shoddy from Seb. Had he been asked, I would hope he would have declined as he would have known that challenges attract competitive crunchers and that means bunkering. Larger projects know better how to govern that to suit their capabilities.

3. To enable server processing of completed results now would simply mean that 157,000 results will be past deadline. To extend the deadlines as a one-off would mean that some of them would count but, with the Transitioner Backlog now showing 36hrs and only 32hrs remaining on the challenge, I suspect even that would render this Sprint null and void.

Profile UBT - Timbo
Joined: 14 Jan 07
Posts: 5
Credit: 432,895
RAC: 343
Message 477 - Posted: 6 Jun 2020, 14:37:13 UTC - in response to Message 476.
Last modified: 6 Jun 2020, 15:36:25 UTC

1. The wanless project has a history of losing work even running open outside of challenges. Why? Bearnol's 'trusty server' isn't trusty at all & has been known to have downtime in excess of his own 48hr deadline.


Hi davidBAM

I'm not aware of these issues personally, but clearly allowing this project to be involved with FB had implications that were not considered properly...

2. Bearnol got no advance notice of being selected for the Sprint. Very shoddy from Seb. Had he been asked, I would hope he would have declined as he would have known that challenges attract competitive crunchers and that means bunkering. Larger projects know better how to govern that to suit their capabilities.


At 12:47 UTC on Thursday 4th June, (just over 8 hours BEFORE the Sprint started and AFTER a large number of tasks were downloaded by members "bunkering"), "bearnol" was advised about the Sprint and within an hour had responded:

"Don’t know how well the system’s going to cope... but I’m doing my best"

So, perhaps bearnol should have kept an eye on the downloads being sent out in advance of the "official start time"...

If needed, he could have made a settings change, (to maybe limit downloads, extend deadlines, etc) but that didn't happen...


3. To enable server processing of completed results now would simply mean that 157,000 results will be past deadline. To extend the deadlines as a one-off would mean that some of them would count but, with the Transitioner Backlog now showing 36hrs and only 32hrs remaining on the challenge, I suspect even that would render this Sprint null and void.


I assume that retrospective credits can be awarded by someone running a script to recognise successfully completed tasks that were uploaded too late - and to then award credits...so some good can come out of this...not that this will benefit any team credits in the Sprint "window" - unless Sebastien extends the Sprint of course.

I guess it's down to Sebastien as to whether to declare this a "cancelled Sprint"...but it is a great shame that what should be a nice competitive weekend of crunching has turned into a waste of time and energy for those involved.


Back of fag-packet calculation: 157,000 tasks at about 1.5 hours each = 235,500 hours of crunching. BOINCstats says there are about 150 active members, so that's about 1,570 hours of "wasted" computing power for each and every active member. One can only guess at the amount of wasted electricity that has been consumed... :-( ...and ALL the data that was processed, collected and uploaded has been "binned".
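For what it's worth, the back-of-the-envelope figures above can be checked directly (using the post's own rough numbers):

```python
# Rough figures taken from the estimate above.
tasks = 157_000
hours_per_task = 1.5                               # approximate runtime per task
total_hours = tasks * hours_per_task               # 235,500 hours of crunching
active_members = 150                               # from BOINCstats
hours_per_member = total_hours / active_members    # about 1,570 hours each
```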

regards
Tim
____________

[H]auntjemima
Joined: 18 Apr 18
Posts: 3
Credit: 2,712,049
RAC: 0
Message 478 - Posted: 6 Jun 2020, 15:27:03 UTC - in response to Message 472.

challenge bunkering causes problems on a small project - again

lots of BAWWWWW about "unfairness" to bunkerers and how it makes the project look BADDDDD

i'm sorry, but expecting the admin to jump through hoops to help is unreasonable when presented with this behaviour


Ridiculous reply.

My suggestion would have been to prevent new tasks from coming out until the backlog was taken care of. I have uploaded thousands of tasks here at a time without issue, so it can handle quite a bit at a time. Just "turning things off" and avoiding it is crazy.

VietOZ
Joined: 2 Apr 18
Posts: 4
Credit: 8,106,604
RAC: 3,799
Message 479 - Posted: 6 Jun 2020, 15:33:26 UTC - in response to Message 477.

One can only guess at the amount of wasted electricity that has been consumed... :-(

regards
Tim


Tim, since this project has a quorum of 2, I believe, you'll also have to account for the users waiting for their pendings to clear. If they ever will??

As for the server's issues outside of challenges/competitions, I can confirm them. A while back, SaM (our team captain) and I were trying to gain some points in the Marathon. We were both running about 500 threads each, got a "could not open database" error that lasted for a few days, and all of our work went down the drain.
Another time, I won't mention how many WUs, I gambled and did a speculative bunker. The server went down for almost a week... all work gone again. OK, I took a chance... can't blame anybody... cool.

There's also another issue that could be more of a BOINC code issue, but if other projects can minimize the errors, I don't understand why WEP couldn't. The issue was/is: if you have a high core-count machine, say more than 32 threads, it's almost guaranteed that you'll get a bunch of errors saying something like "file exit too long" (something like that; I can't remember the exact wording). The more threads you have, the longer it takes for the errors to fade out before everything runs smoothly. My 64-thread machines usually took about half a day of errors before they could crunch at full speed. You don't have to take my word for it: attach a 64-thread machine and run it for a day and you'll see the issue persists to this day. So basically, if you want to contribute to this project and have a high thread-count machine, expect to waste half a day of electricity.

Kiska
Joined: 13 Apr 12
Posts: 3
Credit: 4,187
RAC: 0
Message 480 - Posted: 6 Jun 2020, 15:34:01 UTC

This is a great flatline:



And because the site is using http, I have to use 301 redirects to get the image to load properly :D

Profile marsinph
Joined: 13 Apr 18
Posts: 17
Credit: 196,444
RAC: 0
Message 481 - Posted: 6 Jun 2020, 16:32:55 UTC - in response to Message 480.

This is a great flatline:



And because the site is using http, I have to use 301 redirects to get the image to load properly :D





Hello Kiska,

It is normal. Bearnol has disabled it. Intentionally disabled it, and worst of all, he wrote that he does not intend to restart it "...for a little while..."

ChelseaOilman
Joined: 1 Mar 20
Posts: 5
Credit: 3,166,628
RAC: 574
Message 484 - Posted: 6 Jun 2020, 16:46:17 UTC - in response to Message 479.

There's also another issue that could be more of a BOINC code issue, but if other projects can minimize the errors, I don't understand why WEP couldn't. The issue was/is: if you have a high core-count machine, say more than 32 threads, it's almost guaranteed that you'll get a bunch of errors saying something like "file exit too long" (something like that; I can't remember the exact wording). The more threads you have, the longer it takes for the errors to fade out before everything runs smoothly. My 64-thread machines usually took about half a day of errors before they could crunch at full speed. You don't have to take my word for it: attach a 64-thread machine and run it for a day and you'll see the issue persists to this day. So basically, if you want to contribute to this project and have a high thread-count machine, expect to waste half a day of electricity.

I saw this happen on my 2 TR 2990WX boxes. That issue is clearly still there.

crashtech
Joined: 13 Jul 17
Posts: 4
Credit: 14,212
RAC: 0
Message 485 - Posted: 6 Jun 2020, 16:59:33 UTC - in response to Message 475.
Last modified: 6 Jun 2020, 17:08:08 UTC

I've removed this project from my Linux hosts, and my suggestion would be that other Formula BOINC participants do so as well, to return this project to its usual number of dedicated participants.

Cheers to all!

VietOZ
Joined: 2 Apr 18
Posts: 4
Credit: 8,106,604
RAC: 3,799
Message 486 - Posted: 6 Jun 2020, 17:15:58 UTC - in response to Message 484.

Chelsea,
The trick to avoid, if not all, then the majority of the errors is to start out with fewer than 32 threads. Say, start 30 threads first... then let it run for about 10 minutes or so... then start another 30 threads... and so on. It's annoying to babysit each machine when I have to run this project, but it's better than wasting electricity.
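For anyone wanting to automate that staggered start, here is one possible sketch. BOINC's cc_config.xml supports an `<ncpus>` cap that the client re-reads on `boinccmd --read_cc_config`; the file path, step sizes and delay below are illustrative assumptions, not tested values:

```python
def cc_config_ncpus(n: int) -> str:
    """Build a minimal cc_config.xml that caps the BOINC client at n CPUs."""
    return (
        "<cc_config>\n"
        "  <options>\n"
        f"    <ncpus>{n}</ncpus>\n"
        "  </options>\n"
        "</cc_config>\n"
    )

# A controlling script would write each step's config into the BOINC data
# directory, reload it, then wait ~10 minutes before raising the cap again:
for step in (30, 60):                    # e.g. 30 threads first, then 60
    config = cc_config_ncpus(step)
    # open("/var/lib/boinc-client/cc_config.xml", "w").write(config)  # path varies
    # subprocess.run(["boinccmd", "--read_cc_config"])
    # time.sleep(600)
```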

Bearnol,
Issues like these need to be addressed. I know you're a one-man show, but work on them when you can instead of ignoring them. Each error causes BOINC to increase the time to the next "update". You know it... and if we get enough errors, the next update could be a day later. Which means the computer just sits there idle, waiting for the next update. More advanced users will then write scripts to trickle an update every N minutes/seconds. And if a bunch of users do that, it's like a DDoS on your server.
My point is, ignoring the problem can backfire. You force users to find ways to make it work for them, because no one is addressing the problem.
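The back-off behaviour being described can be sketched roughly like this (the base delay and cap here are illustrative constants, not BOINC's actual, randomized values):

```python
def next_contact_delay(consecutive_errors: int,
                       base: float = 60.0,
                       cap: float = 86400.0) -> float:
    """Illustrative exponential back-off: each consecutive scheduler
    error roughly doubles the wait before the next contact, capped
    here at one day."""
    return min(cap, base * 2 ** consecutive_errors)
```

With a 60-second base, ten consecutive errors already pushes the next contact about 17 hours out: that is the idle window described above, and why some users resort to scripting forced updates.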

Profile marsinph
Joined: 13 Apr 18
Posts: 17
Credit: 196,444
RAC: 0
Message 488 - Posted: 6 Jun 2020, 17:50:45 UTC - in response to Message 486.

Chelsea,
The trick to avoid, if not all, then the majority of the errors is to start out with fewer than 32 threads. Say, start 30 threads first... then let it run for about 10 minutes or so... then start another 30 threads... and so on. It's annoying to babysit each machine when I have to run this project, but it's better than wasting electricity.

Bearnol,
Issues like these need to be addressed. I know you're a one-man show, but work on them when you can instead of ignoring them. Each error causes BOINC to increase the time to the next "update". You know it... and if we get enough errors, the next update could be a day later. Which means the computer just sits there idle, waiting for the next update. More advanced users will then write scripts to trickle an update every N minutes/seconds. And if a bunch of users do that, it's like a DDoS on your server.
My point is, ignoring the problem can backfire. You force users to find ways to make it work for them, because no one is addressing the problem.


Sorry, but that is outside the problem of the disabled server!
You write about a DDoS??? There is no such report!
I suggest you make a new post for the technical problem you encountered.

Profile marsinph
Joined: 13 Apr 18
Posts: 17
Credit: 196,444
RAC: 0
Message 489 - Posted: 6 Jun 2020, 17:52:02 UTC - in response to Message 484.

There's also another issue that could be more of a BOINC code issue, but if other projects can minimize the errors, I don't understand why WEP couldn't. The issue was/is: if you have a high core-count machine, say more than 32 threads, it's almost guaranteed that you'll get a bunch of errors saying something like "file exit too long" (something like that; I can't remember the exact wording). The more threads you have, the longer it takes for the errors to fade out before everything runs smoothly. My 64-thread machines usually took about half a day of errors before they could crunch at full speed. You don't have to take my word for it: attach a 64-thread machine and run it for a day and you'll see the issue persists to this day. So basically, if you want to contribute to this project and have a high thread-count machine, expect to waste half a day of electricity.

I saw this happen on my 2 TR 2990WX boxes. That issue is clearly still there.



What is the relation between your errors and the server that was shut down???

Profile marsinph
Joined: 13 Apr 18
Posts: 17
Credit: 196,444
RAC: 0
Message 490 - Posted: 6 Jun 2020, 17:58:11 UTC - in response to Message 485.

I've removed this project from my Linux hosts, and my suggestion would be that other Formula BOINC participants do so as well, to leave this project back to its usual number of dedicated participants.

Cheers to all!



Hello Crash,
I understand. It is sad.
All now, I repeat ALL, absolutely ALL, is now in the hands of Bearnol.
Me too, I am very disappointed by such inaction from the admin.
Take care of your health. Don't fully disconnect from WEP.
I suggest you set your BOINC manager to "suspend" so you keep your account.
Also your PIS, and more importantly your CPID on BOINCstats.
Best regards

Profile UBT - Timbo
Joined: 14 Jan 07
Posts: 5
Credit: 432,895
RAC: 343
Message 491 - Posted: 6 Jun 2020, 19:45:12 UTC - in response to Message 488.


You write about a DDoS??? There is no such report!


Hi marsinph

I think you misunderstand.

The comment is not "about" a DDoS problem.

It is about maybe a few people who could write a script (via the command line on their hosts) that causes BOINC Manager to "update" this project every few seconds.

This action is "like" a DDoS, as it causes the project server to have to react to the "update" command from each host.

If one person issues an "update" command, any server can deal with that. But if 100, 1,000 or 100,000 hosts all do this continuously, the server has no time to do its "real" job, as it has to react to 100, 1,000 or 100,000 hosts, all expecting a response.

This action can bring a server down...in a similar way to what happened when the Sprint started on Thursday night and maybe 150+ active members all started "updating" their hosts and trying to upload thousands of tasks.

regards
Tim
____________

Cruncher Pete
Joined: 2 Jan 08
Posts: 2
Credit: 2,653,568
RAC: 107
Message 498 - Posted: 6 Jun 2020, 23:36:47 UTC

I am not an expert like marsinph so I am not going to waste words here.

As far as Wanless and bearnol are concerned, he has achieved what he wanted. He is getting too much help that he does not need. Accordingly, I am deleting the project, and if I hear the name bearnol anywhere on the web, I will mark him as banned.

ChelseaOilman
Joined: 1 Mar 20
Posts: 5
Credit: 3,166,628
RAC: 574
Message 499 - Posted: 7 Jun 2020, 0:10:28 UTC

So much hate. Makes me wonder why any project would want to be associated with Formula BOINC. There's really very little to be gained.

Profile UBT - Timbo
Joined: 14 Jan 07
Posts: 5
Credit: 432,895
RAC: 343
Message 501 - Posted: 11 Jun 2020, 17:58:48 UTC - in response to Message 477.

3. To enable server processing of completed results now would simply mean that 157,000 results will be past deadline. To extend the deadlines as a one-off would mean that some of them would count but, with the Transitioner Backlog now showing 36hrs and only 32hrs remaining on the challenge, I suspect even that would render this Sprint null and void.


I assume that retrospective credits can be awarded by someone running a script to recognise successfully completed tasks that were uploaded too late - and to then award credits...so some good can come out of this...not that this will benefit any team credits in the Sprint "window" - unless Sebastien extends the Sprint of course.

regards
Tim


Hi all

In the last day or so, I was able to upload and report a large batch of tasks that had been downloaded prior to the Sprint starting...but which were not able to be uploaded once the Sprint started, due to the project website not working as expected.

And these do seem to be getting validated and being awarded credits....even 5 days after their deadlines !

Hopefully others are in the same boat and will see some reward for their crunching efforts.

If that is the case then "kudos" to bearnol for perhaps tweaking the server to allow this.

And it's nice to see that the server is now back up and running OK.

regards
Tim
____________

Profile marsinph
Joined: 13 Apr 18
Posts: 17
Credit: 196,444
RAC: 0
Message 502 - Posted: 11 Jun 2020, 18:07:35 UTC - in response to Message 498.

I am not an expert like marsinph so I am not going to waste words here.

As far as Wanless and bearnol are concerned, he has achieved what he wanted. He is getting too much help that he does not need. Accordingly, I am deleting the project, and if I hear the name bearnol anywhere on the web, I will mark him as banned.



Hello Pete,
I have learned from you!!! You remember... 2018...
I see you are very angry. So am I.
Now everything seems to work again, with the usual problems. But no reaction, no explanation from Bearnol!!! Nothing!!! In his own ivory tower...
Compare 2019: the very little project "Xanson" had asked not to participate in the Sprint. Here, Bearnol asked to participate.
There are only two possibilities: either Sebastien (FB) is not responsible, or Bearnol is playing a very strange game.
I trust Seb (even when I don't agree), but this time ALL, I repeat ALL, comes from Bearnol, who also published that he did not want to restart the service.
Nice mentality!!!

Seb, please, I really think you need to cancel this Sprint and exclude the project.
I know I will make enemies, because some teams will lose points.
But this suggestion was already made by UK BOINC Team, who have the most to lose!!!

Bearnol, we are waiting for you to unhide!!!

Philippe

All of you, take care of your health

Profile marsinph
Joined: 13 Apr 18
Posts: 17
Credit: 196,444
RAC: 0
Message 503 - Posted: 11 Jun 2020, 18:20:24 UTC - in response to Message 501.

3. To enable server processing of completed results now would simply mean that 157,000 results will be past deadline. To extend the deadlines as a one-off would mean that some of them would count but, with the Transitioner Backlog now showing 36hrs and only 32hrs remaining on the challenge, I suspect even that would render this Sprint null and void.


I assume that retrospective credits can be awarded by someone running a script to recognise successfully completed tasks that were uploaded too late - and to then award credits...so some good can come out of this...not that this will benefit any team credits in the Sprint "window" - unless Sebastien extends the Sprint of course.

regards
Tim


Hi all

In the last day or so, I was able to upload and report a large batch of tasks that had been downloaded prior to the Sprint starting...but which were not able to be uploaded once the Sprint started, due to the project website not working as expected.

And these do seem to be getting validated and being awarded credits....even 5 days after their deadlines !

Hopefully others are in the same boat and will see some reward for their crunching efforts.

If that is the case then "kudos" to bearnol for perhaps tweaking the server to allow this.

And it's nice to see that the server is now back up and running OK.

regards
Tim


Hello Tim,
For me too, WUs past their deadline (correctly reported) are being validated after the deadline. So Bearnol has extended it.
But as you say, no reaction, no comments, no apologies, no explanations, NOTHING, only hiding!
That is why I don't agree.

A few days ago you suggested that Sebastien cancel the Sprint. Your team would be the most impacted, but it is honest for everyone.

Bearnol, unhide and explain your goal!!!
If you no longer need the BOINC power, then remove your project (like Xanson did, with the greatest honour, last year).

Wait and see!!!



Copyright © 2020 M+2 Group