Hi, I'm trying to get a steady reading. The Roundline doesn't seem sensitive enough; at idle it shows a zero value most of the time. In other words, the Roundline is always at minimum or zero.
This code is admittedly a bit messed up, but it is based on my histogram meter, which is really responsive. I have added a few things in an effort to make the Roundline equally sensitive and responsive visually, but nothing here seems to be making any visible difference:
There are many ways to be different - there is only one way to be yourself - be amazing at it
The law of averages says what it means; even if you get everything right, you will get something wrong. Therefore; self managing error trapping initiates another set of averages - amongst the errors, some of them will not be errors, instead those instances will appear to be "luck". One cannot complain of the 'appearance' of 'infinite regress of causation', even if it does not have a predictable pattern, only that it requires luck to achieve.
There is no AutoScale on a Roundline meter. For Net measures, you set a MaxValue in bits per second that reflects the maximum speed of your connection.
If you have a 100Mb/s connection from your ISP, a 1kb/s chunk of traffic is so small in that context that it really just rounds to zero visibly. It would take a meg of traffic to even get to 1%. I'm not sure what you are going for. I find that the Net measures don't really lend themselves to a visual representation using Bar or Roundline most of the time. How often are you actually using any significant part of your bandwidth, unless you are actively downloading / uploading a file? Surfing the web and other normal internet activities are not going to even budge a Roundline.
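To make that concrete, here is a minimal sketch of a Roundline driven by a NetIn measure scaled for a 100 Mb/s connection. The measure and meter names are just placeholders, not from anyone's actual skin:

```ini
; Hypothetical NetIn measure for a 100 Mb/s connection.
; 100 Mb/s = 100,000,000 bits per second.
[MeasureNetIn]
Measure=NetIn
MaxValue=100000000

; The Roundline fills in proportion to the measure's percentage of MaxValue.
[MeterNetRound]
Meter=Roundline
MeasureName=MeasureNetIn
LineColor=0,200,255,255
Solid=1
; At 1 kb/s of traffic the fill is 1,000 / 100,000,000 = 0.001% -- visually zero.
```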
[attached screenshot: 1.jpg]
So I use "lights" to indicate activity, the text to show the amount, and although I have a Bar meter for each, they only are visually of much use when I'm actively doing something significant with my bandwidth. Downloading a file or something. 99% of the time, you are using far less than 1% of your total bandwidth.
AutoScale makes some (ok, not much in my view, but some) sense on a Histogram or Line meter, as those represent an elapsed period of time. So you are showing activity in the context of that time period. It's not so much "how much is it?", but "how much has it changed?" Bar and Roundline are a single point in time, and there is no relationship to what came before or after. AutoScale on Bar or Roundline would make no sense whatsoever. Then they just literally wouldn't mean anything at all.
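For comparison, this is roughly how AutoScale is used on a time-based meter. A minimal Histogram sketch, with illustrative names:

```ini
[MeasureNetInHist]
Measure=NetIn

; A Histogram covers an elapsed window of time, so AutoScale=1 can
; rescale the vertical axis to the highest value seen in that window.
[MeterNetHist]
Meter=Histogram
MeasureName=MeasureNetInHist
AutoScale=1
PrimaryColor=0,200,255,255
```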
That makes a certain sense. Bummer for me, since I have been adding roundlines to anything it will serve a purpose on. It looks cool, etc. I was just hoping to see that relative "bounce" thinking AutoScale would capture the low data transfers then "autoscale" for when the bandwidth was really being used.
I get approximately 35 Mbps down and 12 Mbps up with the Ookla tester. Even then the Roundline was shamefully unresponsive during the test.
Mor3bane wrote:That makes a certain sense. Bummer for me, since I have been adding roundlines to anything it will serve a purpose on. It looks cool, etc. I was just hoping to see that relative "bounce" thinking AutoScale would capture the low data transfers then "autoscale" for when the bandwidth was really being used.
I get approximately 35 Mbps down and 12 Mbps up with the Ookla tester. Even then the Roundline was shamefully unresponsive during the test.
Thanks jsmorley for the clarifications.
Well, we need to be sure that your roundlines are reacting correctly. Although as I said, 99% of the time you are using less than 1% of your total bandwidth, when you actually do the test on Speedtest.net or wherever, you should in theory get to 100%, if you have your MaxValue set right on the measures.
The goal is to set MaxValue to the total "bits" per second your bandwidth supports.
Note that MaxValue isn't required, as the Net measures will dynamically set the top of the "range" to the maximum value it has seen since it was started, but that means that until you actually DO use 100% of your bandwidth, you are getting percentage results that don't really mean much.
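For a connection like the one mentioned earlier in the thread (roughly 35 Mbps down, 12 Mbps up), the idea would be something like this. Measure names are just placeholders:

```ini
; Download measure pinned to the connection's rated speed:
; 35 Mb/s = 35,000,000 bits per second.
[MeasureNetIn]
Measure=NetIn
MaxValue=35000000

; Upload measure: 12 Mb/s = 12,000,000 bits per second.
[MeasureNetOut]
Measure=NetOut
MaxValue=12000000
```

Without MaxValue, the measure's range grows to the largest value seen since the skin loaded, so the percentages drift until you have actually saturated the connection at least once.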
jsmorley wrote:Well, we need to be sure that your roundlines are reacting correctly. Although as I said, 99% of the time you are using less than 1% of your total bandwidth, when you actually do the test on Speedtest.net or wherever, you should in theory get to 100%, if you have your MaxValue set right on the measures.
The goal is to set MaxValue to the total "bits" per second your bandwidth supports.
Note that MaxValue isn't required, as the Net measures will dynamically set the top of the "range" to the maximum value it has seen since it was started, but that means that until you actually DO use 100% of your bandwidth, you are getting percentage results that don't really mean much.
That's cool. No worries. My solution was to point the Roundline at a NetTotal measure. It moves nicely now, and I still have my Histogram meter for the more visual in/out net bps. The NetTotal simply reflects activity on average, and it registers more often now. I have also taken off the AverageSize parameter, and the visual is nicer. So I am contented for now.
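In case it helps anyone else, the change amounts to something like this. The names here are illustrative, not copied from the actual skin:

```ini
; NetTotal sums NetIn and NetOut, so it registers on any traffic at all.
; AverageSize is deliberately omitted so the value is not smoothed.
[MeasureNetTotal]
Measure=NetTotal

[MeterTotalRound]
Meter=Roundline
MeasureName=MeasureNetTotal
LineColor=255,180,0,255
Solid=1
```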