I have this ugly code:
if ( v > 10 ) size = 6;
if ( v > 22 ) size = 5;
if ( v > 51 ) size = 4;
if ( v > 68 ) size = 3;
if ( v > 117 ) size = 2;
if ( v > 145 ) size = 1;
return size;
How can I get rid of the multiple if statements?
You want a cascading if/else here. This also means there's no chance of more than one of the branches being taken.
if ( v > 145 ) size = 1;
else if ( v > 117 ) size = 2;
else if ( v > 68 ) size = 3;
else if ( v > 51 ) size = 4;
else if ( v > 22 ) size = 5;
else if ( v > 10 ) size = 6;
return size;
This is better for your case.
Optionally, you could use a switch statement wherever the conditions allow it.
Update:
If your analysis shows that the value of 'v' generally resides in the lower range (<= 10) in most cases, then you can add this:
if (v <= 10) size = SOME_DEFAULT_VALUE;
else if ( v > 145 ) size = 1;
else if ( v > 117 ) size = 2;
else if ( v > 68 ) size = 3;
else if ( v > 51 ) size = 4;
else if ( v > 22 ) size = 5;
else if ( v > 10 ) size = 6;
Further:
You can also alter the condition sequence according to your analysis. If you know that most of the values are less than 10, and that the second most common range is 68-117, you can order the conditions accordingly, as sketched below.
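For illustration, here is a minimal sketch of such a reordering, assuming (purely hypothetically) that v <= 10 is the most common case and 68 < v <= 117 the second most common; SOME_DEFAULT_VALUE is the same placeholder as above.

// Hypothetical frequency-based ordering: most common ranges checked first.
if (v <= 10)                 size = SOME_DEFAULT_VALUE; // assumed most frequent case
else if (v > 68 && v <= 117) size = 3;                  // assumed second most frequent case
else if (v > 145)            size = 1;
else if (v > 117)            size = 2;
else if (v > 51)             size = 4;
else if (v > 22)             size = 5;
else                         size = 6;                  // 10 < v <= 22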
return v > 145 ? 1
: v > 117 ? 2
: v > 68 ? 3
: v > 51 ? 4
: v > 22 ? 5
: v > 10 ? 6
: "put inital size value here";
Here's my shot at it...
Update: fixed. The previous version gave incorrect answers for exact boundary values (10, 22, 51, ...). This one defaults to 6 when val <= 10.
static int Foo(int val) // uses java.util.Arrays
{
    // thresholds for sizes 6, 5, 4, 3, 2, 1
    int[] v = new int[]{10, 22, 51, 68, 117, 145};
    int pos = Arrays.binarySearch(v, val);
    if (pos < 0) pos = ~pos;  // not found: ~pos is the insertion point
    if (pos > 0) pos--;       // step down so v[pos] is the largest threshold strictly below val
                              // (when val <= 10 this leaves pos = 0, giving the default 6)
    return 6 - pos;
}
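Given the boundary trouble mentioned in the update, a quick brute-force check of Foo against the original thresholds doesn't hurt. This is my addition, and it assumes the default of 6 for val <= 10 that this version uses.

// Brute-force comparison of Foo against the original if-chain for a range of inputs.
for (int val = 0; val <= 200; val++) {
    int expected = 6;              // this version's default for val <= 10
    if (val > 22)  expected = 5;
    if (val > 51)  expected = 4;
    if (val > 68)  expected = 3;
    if (val > 117) expected = 2;
    if (val > 145) expected = 1;
    if (Foo(val) != expected) {
        throw new AssertionError("mismatch at val=" + val);
    }
}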
How about this approach:
int getSize(int v) {
    int[] thresholds = new int[]{145, 117, 68, 51, 22, 10};
    for (int i = 0; i < thresholds.length; i++) {
        if (v > thresholds[i]) return i + 1;
    }
    return 1; // reached only when v <= 10, which the original code leaves undefined
}
int[] arr = new int[] {145, 117, 68, 51, 22, 10};
for (int index = 0; index < arr.length; index++)
{
    if (v > arr[index]) return 1 + index;
}
return defaultValue;
What about
int result = 6;
List<Integer> caps = Arrays.asList(22, 51, 68, 117, 145);
for (Integer cap : caps) {
    result = (v > cap) ? result - 1 : result;
}
return result;
Greetz, GHad
Using the NavigableMap API:
NavigableMap<Integer, Integer> s = new TreeMap<Integer, Integer>();
s.put(10, 6);
s.put(22, 5);
s.put(51, 4);
s.put(68, 3);
s.put(117, 2);
s.put(145, 1);
return s.lowerEntry(v).getValue(); // note: lowerEntry(v) is null when v <= 10
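One caveat (my note, not part of the original answer): lowerEntry(v) returns null when v <= 10, so a null-safe variant needs a fallback; DEFAULT_SIZE below is a placeholder.

// Null-safe lookup: fall back to a default when no boundary lies strictly below v.
Map.Entry<Integer, Integer> entry = s.lowerEntry(v);
return (entry != null) ? entry.getValue() : DEFAULT_SIZE;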
There are a ton of answers and suggestions here but I honestly don't see any of them "prettier" or "more elegant" than the original method.
If you had dozens or HUNDREDS of comparisons to check, then I could easily see going to some for loop, but honestly, for the handful of comparisons you have, stick with the ifs and move on. It's not that ugly.
Here is another way to solve the problem:
/**
 * The last item of haystack should be the minimum integer (Integer.MIN_VALUE),
 * so the caller can check afterwards whether a match was found or not.
 */
public static int siz(int needle, int haystack[], int pos) {
    return (needle > haystack[pos]) ? ++pos : siz(needle, haystack, ++pos);
}
Example usage:
int arr[] = {145, 117, 68, 51, 22, 10, Integer.MIN_VALUE};
int size = siz(118, arr, 0);
// Check if the item was found
if (arr.length != size) {
    // success
} else {
    // handle error
}
This is apparently a highly controversial post; it has even been featured on the DailyWTF site. (You can see the current score of this post for yourself by clicking between the up and down arrows if you've got 1,000 or more rep.)
Here is an object-oriented solution, a class called Mapper<S,T>
that maps values from any type that implements Comparable to any target type.
Syntax:
Mapper<String, Integer> mapper = Mapper.from("a", "b", "c").to(1, 2, 3);
// Map a single value
System.out.println(mapper.map("beef")); // 2
// Map a Collection of values
System.out.println(mapper.mapAll(
        Arrays.asList("apples", "beef", "lobster"))); // [1, 2, 3]
Code:
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;

public class Mapper<S extends Comparable<S>, T> {

    private final S[] source;
    private final T[] target;

    // Builder to enable from... to... syntax and
    // to make Mapper immutable
    public static class Builder<S2 extends Comparable<S2>> {
        private final S2[] data;

        private Builder(final S2[] data) {
            this.data = data;
        }

        public <T2> Mapper<S2, T2> to(final T2... target) {
            return new Mapper<S2, T2>(this.data, target);
        }
    }

    private Mapper(final S[] source, final T[] target) {
        final S[] copy = Arrays.copyOf(source, source.length);
        Arrays.sort(copy);
        this.source = copy;
        this.target = Arrays.copyOf(target, target.length);
    }

    // Factory method to get builder
    public static <U extends Comparable<U>, V> Builder<U> from(final U... items) {
        return new Builder<U>(items);
    }

    // Map a collection of items
    public Collection<T> mapAll(final Collection<? extends S> input) {
        final Collection<T> output = new ArrayList<T>(input.size());
        for (final S s : input) {
            output.add(this.map(s));
        }
        return output;
    }

    // Map a single item
    public T map(final S input) {
        final int sourceOffset = Arrays.binarySearch(this.source, input);
        return this.target[
            Math.min(
                this.target.length - 1,
                sourceOffset < 0 ? Math.abs(sourceOffset) - 2 : sourceOffset
            )
        ];
    }
}
Edit: finally replaced the map() method with a more efficient (and shorter) version. I know: a version that searches partitions would still be faster for large arrays, but sorry: I'm too lazy.
If you think this is too bloated, consider this:
Sure, all of these features could be easily removed, but the code would be less complete, less usable or less stable.
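Applied to the numbers from the question, usage could look like the following sketch (my addition, assuming the class above as-is; note that map() treats each key as an inclusive lower bound, so 22 maps to 5 rather than 6, and values below 10 throw an ArrayIndexOutOfBoundsException).

// Mapping the question's thresholds to sizes with the Mapper class above.
Mapper<Integer, Integer> sizes =
        Mapper.from(10, 22, 51, 68, 117, 145).to(6, 5, 4, 3, 2, 1);

System.out.println(sizes.map(118)); // 2
System.out.println(sizes.map(15));  // 6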
My commenting ability isn't turned on yet; hopefully no one will say "rightfully" based on my answer...
Prettying up the ugly code could/should be defined as trying to achieve:
IMO the answer given by org.life.java was the prettiest and extremely easy to read. I also liked the order in which the conditions were written, for reasons of reading and performance.
Looking over all the comments on this subject, at the time of my writing, it appears that only org.life.java raised the issue of performance (and maybe mfloryan, too, stating something would be "longer"). Certainly in most situations, and given this example it shouldn't bear a noticeable slowdown however you write it.
However, nesting your conditions and ordering them optimally can improve performance [worthwhile, particularly if this were looped].
All that being said, nesting and ordering conditions (that are more complex than your example) brought on by determination to achieve as fast as possible execution will often produce less readable code, and code that's harder to change. I refer again to #3, pragmatism... balancing the needs.
Is there an underlying mathematical rule to this? If so, you should use it, but only if it comes from the problem domain, not just some formula that happens to fit the cases.
The original code seems fine to me, but if you don't mind multiple returns you might prefer a more tabular approach:
if ( v > 145 ) return 1;
if ( v > 117 ) return 2;
if ( v > 68 ) return 3;
if ( v > 51 ) return 4;
if ( v > 22 ) return 5;
if ( v > 10 ) return 6;
return ...; // The <= 10 case isn't handled in the original code snippet.
See the multiple return or not discussion in org.life.java's answer.
I realise this is ultimately the same as mfloryan's approach, except for the sorting improvement which improves maintainability and extensibility. And comments.
Sorry for the ridiculously mixed syntax -- this is supposed to be C but I cannot remember nor can be bothered to look it up.
This returns 7 for the <= 10 case, which is the logical extension although undefined. And I would much rather have it return null in mfloryan's example if the code actually runs into the impossibility regime. It had better fail as early as possible.
int thresholds[] = sort({10, 22, 51, 68, 117, 145});
// compares v -- retval is _supposed_ to be in reverse order!
for (int i = thresholds.length - 1; i >= 0; --i) {
    if (v > thresholds[i]) return thresholds.length - i;
}
Note: the index counts down because I want to highlight the fact that an optimising compiler may reverse the for statement so that the comparison to zero is done last. This means that even if you count up from zero, this algorithm will hold true. If the problem space tends to be small and you want to count the other way round (for optimising the loop), you will need to reverse the output of the sort function. This you will probably want to do in the case of large thresholds arrays.
Or maybe I should just simply put my money where my mouth is:
int thresholds[] = reverse(sort({10, 22, 51, 68, 117, 145}));
// compares v -- retval is _supposed_ to be in reverse order!
for (int i = 0; i < thresholds.length; ++i) {
    if (v > thresholds[i]) return 1 + i;
}
There! Much nicer! I do realise that sort in C is really qsort and that means the reverse and sort functions are combined. (Inner Lisper in me is screaming that this is definitely not the most optimal in functional programming, but this is really so simple it does not warrant any more time than I have already given it.)
This is my code sample, using SortedSet. You initialise boundaries once.
SortedSet<Integer> boundaries = new TreeSet<Integer>();
boundaries.add(10);
boundaries.add(22);
boundaries.add(51);
boundaries.add(68);
boundaries.add(117);
boundaries.add(145);
Then use it this way for multiple values of v (with size initialised beforehand):
SortedSet<Integer> subset = boundaries.tailSet(v);
if (subset.size() != boundaries.size())
    size = subset.size() + 1;
Just for completeness, let me suggest that you could set up an array SIZES indexed from 0 to 145 so the answer could be returned directly as SIZES[v]. Pardon me for not writing the whole thing out. You would have to make sure v was in range, of course.
The only reason I can think of for doing it that way would be if you were going to create the array once and use it thousands of times in an application that had to be really fast. I mention it as an example of a trade-off between memory and speed (not the problem it once was), and also between setup time and speed.
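A minimal sketch of that idea, with illustrative names of my own (SIZES, buildSizes); values above 145 still need a separate check.

// Precomputed lookup table: SIZES[v] answers the question directly for 0 <= v <= 145.
static final int[] SIZES = buildSizes();

static int[] buildSizes() {
    int[] sizes = new int[146];          // indices 0..145
    for (int v = 0; v < sizes.length; v++) {
        int size = 0;                    // 0 stands in for the unspecified v <= 10 case
        if (v > 10)  size = 6;
        if (v > 22)  size = 5;
        if (v > 51)  size = 4;
        if (v > 68)  size = 3;
        if (v > 117) size = 2;
        sizes[v] = size;
    }
    return sizes;
}

// usage: int size = (v > 145) ? 1 : SIZES[v];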
I have one more version for you. I don't really think it's the best one because it adds unnecessary complexity in the name of "performance" when I'm 100% sure this function will never be a performance hog (unless someone is calculating size in a tight loop a million times ...).
But I present it just because I thought performing a hard-coded binary search to be sort of interesting. It doesn't look very binary-y because there aren't enough elements to go very deep, but it does have the virtue that it returns a result in no more than 3 tests rather than 6 as in the original post. The return statements are also in order by size which would help with understanding and/or modification.
if (v > 68) {
    if (v > 145) {
        return 1;
    } else if (v > 117) {
        return 2;
    } else {
        return 3;
    }
} else {
    if (v > 51) {
        return 4;
    } else if (v > 22) {
        return 5;
    } else {
        return 6;
    }
}
The most obvious problem with the OP's solution is branching, so I would suggest a polynomial regression. This will result in a nice branchless expression of the form
size = round(k_0 + k_1 * v + k_2 * v^2 + ...)
You will of course not get an exact result, but if you can tolerate some deviance it's a very performant alternative. Since the 'leave unmodified' behavior of the original function for values where v < 10 is impossible to model with a polynomial, I took the liberty of assuming a zero-order hold interpolation for this region.
For a degree-45 polynomial with the following coefficients,
-9.1504e-91 1.1986e-87 -5.8366e-85 1.1130e-82 -2.8724e-81 3.3401e-78 -3.3185e-75 9.4624e-73 -1.1591e-70 4.1474e-69 3.7433e-67 2.2460e-65 -6.2386e-62 2.9843e-59 -7.7533e-57 7.7714e-55 1.1791e-52 -2.2370e-50 -4.7642e-48 3.3892e-46 3.8656e-43 -6.0030e-41 9.4243e-41 -1.9050e-36 8.3042e-34 -6.2687e-32 -1.6659e-29 3.0013e-27 1.5633e-25 -8.7156e-23 6.3913e-21 1.0435e-18 -3.0354e-16 3.8195e-14 -3.1282e-12 1.8382e-10 -8.0482e-09 2.6660e-07 -6.6944e-06 1.2605e-04 -1.7321e-03 1.6538e-02 -1.0173e-01 3.6100e-01 -6.2117e-01 6.3657e+00
, you get a beautifully fitted curve (plot not reproduced here).
And as you can see, you get a 1-norm error of just 1.73 across the whole range from 0 to 200*!
*Results for v ∉ [0, 200] may vary.
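For what it's worth, if anyone actually wanted to evaluate such a polynomial, Horner's rule would be the usual way; a rough sketch of mine, assuming the coefficients are stored highest power first, as listed above.

// Horner's rule: one multiply and one add per coefficient,
// with coefficients ordered from the highest power of v down to the constant term.
static int size(double v, double[] coeffs) {
    double acc = 0.0;
    for (double c : coeffs) {
        acc = acc * v + c;
    }
    return (int) Math.round(acc);
}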
If you really want the fastest solution for this particular problem, this one is a constant-time lookup (after an O(N) setup).
final int minBoundary = 10;
final int maxBoundary = 145;
final int minSize = 1; // the size once v exceeds maxBoundary

Vector<Integer> index = new Vector<Integer>(maxBoundary + 1);
// run through once and set the values in your index
Subsequently:
if (v > minBoundary)
{
    size = (v > maxBoundary) ? minSize : index.get(v);
}
The issue with this is that it uses more memory and of course if maxBoundary is a lot bigger it will be very space inefficient (as well as take a longer time to initialise).
This may sometimes be the best solution for the situation.
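Here is a sketch of the elided initialisation step ("run through once and set the values in your index"); sizeOf is a hypothetical helper standing in for the original if-chain.

// Fill the index once; afterwards index.get(v) is a constant-time lookup
// for minBoundary < v <= maxBoundary.
for (int i = 0; i <= maxBoundary; i++) {
    index.add(sizeOf(i));   // sizeOf(i) = the original if-chain (hypothetical helper)
}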
Actually, if the sizes are likely to change, doing it in the database could be a good alternate strategy:
CREATE TABLE VSize (
LowerBound int NOT NULL CONSTRAINT PK_VSize PRIMARY KEY CLUSTERED,
Size int NOT NULL
)
INSERT VSize VALUES (10, 6)
INSERT VSize VALUES (22, 5)
INSERT VSize VALUES (51, 4)
INSERT VSize VALUES (68, 3)
INSERT VSize VALUES (117, 2)
INSERT VSize VALUES (145, 1)
And a stored procedure or function:
CREATE PROCEDURE VSizeLookup
    @V int,
    @Size int OUT
AS
    SELECT TOP 1 @Size = Size
    FROM VSize
    WHERE @V > LowerBound
    ORDER BY LowerBound DESC
A clean-up of mfloryan's answer:
int getSize(int v) {
    int r = 0;
    for (int t : new int[]{145, 117, 68, 51, 22, 10}) {
        r++;
        if (v > t) { return r; }
    }
    return 1; // v <= 10: not defined by the original code
}
It is interesting that there are plenty of beautiful answers for a simple "ugly" question. I like mfloryan's answer best; however, I would push it further by removing the hard-coded array inside the method. Something like:
int getIndex(int v, int[] descArray) {
    for (int i = 0; i < descArray.length; i++)
        if (v > descArray[i]) return i + 1;
    return 0;
}
It is now more flexible: it can process any given array in descending order, and the method will find the index where the value v belongs.
PS. I cannot comment yet on the answers.
So many answers and no one has suggested using Drools. If you think ifs are ugly, you should find Drools amazing.
The obvious answer is to use Groovy:
def size = { v -> [145,117,68,51,22,10].inject(1) { s, t -> v > t ? s : s + 1 } }
One-liners are always better. It returns 7 for the undefined case where v <= 10.
7 - ((x > 10) + (x > 22) + (x > 51) + (x > 68) + (x > 117) + (x > 145))
where 7 is the default value (x <= 10).
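Java has no implicit boolean-to-int conversion, so a direct translation needs explicit 0/1 terms; a sketch:

// Each comparison contributes 1 when true; the result is 7 minus the number of thresholds exceeded.
int size = 7 - ((x > 10  ? 1 : 0) + (x > 22  ? 1 : 0) + (x > 51  ? 1 : 0)
              + (x > 68  ? 1 : 0) + (x > 117 ? 1 : 0) + (x > 145 ? 1 : 0));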