At this point, machine and human intelligence are largely incomparable; the ways they process and store information seem directly antithetical. Machines have been meticulously designed from the bottom up to exercise tireless analytical skill. "Analytical" may not even be the right word, as computers don't so much "break down" information as process everything bit by bit, without error or generalization. Human thinking, however, has evolved to be highly generalized and built on the synthesis of information. Humans more easily recall feelings, tendencies, or situations without much attention to detail.
If a human is shown, say, a commercial for some brand of alcohol, he will certainly remember the general feeling and setting of the clip, the emotions of the actors, and the overall atmosphere with its human implicatures. An analytical machine, by contrast, would be best suited to recording the exact clothing and appearance of the actors, the specifics of the set, and perhaps the meticulous crafting of the bottle advertised; it would scarcely have any grasp of synthetic notes such as mood and atmosphere unless it had been elegantly programmed to draw such conclusions from physical data. Even then, this machine would have to be, in practical terms, highly intelligent.
Of course, there is already a strange double standard in our use of the word intelligent. We think of a highly intelligent machine as one that can make generalizations, while we think of an intelligent person as one with the capacity to notice the analytical minutiae of life.
Regardless, it seems that human and machine intelligence sit at a largely complementary equilibrium. That is, in some sense, we have invented computers that excel precisely at what we are so poor at. On the other hand, it may have more to do with the fact that programming the generalizing skills humans have into devices is simply a difficult thing to manage. Precise analytical computation seems, epistemologically and otherwise, to be a precondition for synthesis, since synthesis would be impossible without extracting at least some data from reality. It may simply be that in the case of the human mind, the brain uses its analytical skills to construct more digestible mental generalizations and then disposes of the specifics to save cerebral memory space.
To create such a machine would no doubt be the goal of many a programmer looking to build a master machine mind, but the actual mechanism of human thought may be somewhat more convoluted. Children often like toying with their friends by asking questions like, "Where do you put toast in a toaster?" or "How many of each kind of animal did Moses put on the ark?" The fact that humans would ever fall victim to these kinds of basic trick questions is an absolute embarrassment to our entire race. Apparently humans are so blinkered by their synthetic and economical thinking that they have few qualms about riding roughshod over retrospectively integral data while chasing the conclusions they draw.
It may not be the essence of human thought, but there seems to be a strong element of association built into our cognition, or at least into our ability to recall information. Take the trick question above: "How many of each kind of animal did Moses put on the ark?" This question can trick some answerers by suggesting Moses as the ark's helmsman instead of Noah, but it would not fare nearly as well had it asked how many pairs of animals Louis XII or Ted Nugent had placed on board. Moses is, of course, a figure closely associated with Noah in general thinking, both being disyllabic characters of the Old Testament.
It may be that humans only consciously register bad analytical data past a certain point. Noah is the only character in popular culture who has ever really been associated with loading animals onto an ark, so anyone asked the question knows the intention behind it even if the asker has misstated Noah's name. Moses may pass under the radar without conscious notice, but somehow anyone would recognize that Ted Nugent had never, in all of his career, floated a vessel carrying all the animals of the world. It may be that the brain does analytically process every element of the sentence, but an error is only consciously reported if it is sufficiently egregious and relevant.
This may seem like a kind of human flaw, but the ability to disregard information is necessary in order to generalize properly. To tell a machine to generalize is to tell a machine to lie: to draw a generalized conclusion, one must necessarily omit confounding data points. This shouldn't be too surprising, but it does mean that a synthesizing machine would need to be programmed with a capacity for "judgement" in order to exclude or ignore some of its input.
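As a rough illustration of what such programmed "judgement" might look like, here is a minimal sketch in Python; the outlier-trimming approach and the function name are illustrative assumptions on my part, not a claim about how any real system does it.

```python
def generalize(samples, trim_fraction=0.1):
    """Summarize noisy observations by deliberately ignoring the extremes.

    The "judgement" here is crude: sort the data and throw away a fixed
    fraction from each end before averaging what remains.
    """
    ordered = sorted(samples)
    k = int(len(ordered) * trim_fraction)
    kept = ordered[k:len(ordered) - k] if k else ordered
    return sum(kept) / len(kept)

# A single wild reading drags a naive average far off course;
# the trimmed version simply refuses to consider it.
readings = [20.1, 19.8, 20.3, 20.0, 19.9, 87.4]
print(sum(readings) / len(readings))   # 31.25, skewed by the outlier
print(generalize(readings, 0.2))       # 20.075, the "generalized" answer
```

The generalized answer is closer to the truth precisely because it has discarded part of the evidence, which is the whole point of the paragraph above.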
Humans do have such an implicit judgement, and it seems mostly to overrule any underlying analytical machinations. We generalize in the blink of an eye and are often not conscious of the underlying reasons, or of the data on which our generalizations are based. The conscious retention of analytical thinking is a habit that, more often than not, has to be knowingly cultivated. Nonetheless, humans can cultivate that habit, and it makes us more eclectic than the current incarnations of our machine intelligence.
Human intelligence is also, for lack of a better word, useful. It may take quite a bit of work, in the process of programming and designing machines, to put machine intelligence to uses that benefit us, but there is a nearly direct correspondence between human intelligence and human needs. The evolutionary history of humankind has given us the intellectual modules necessary to survive in our settings with what could be described as purpose and intention. Of course, natural selection has afforded us no abilities not built on the pretext of survival and reproduction, so our intelligence is often superficial and limited, but it is still uniquely fitted to our circumstances.
This is not grounds for human gloating. Human intelligence may be wider and more practicable than machines', but the brain's computational power is obviously peanuts in comparison with even basic computer code. No savant in the world can come close to the speed or precision of Windows' basic ~750KB Calc program, and "teaching" a machine to calculate Fibonacci numbers into the hundred millions is CompSci 101, an easy task for the computational abilities of computers. The analytical superiority of machine intelligence isn't even worth noting, given its obviousness; the only question is how much greater their generalizing skills will become once humans manage to spark those kinds of developments. With those kinds of precise underpinnings, it's hard to imagine humans remaining on top of their own game.
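For a sense of just how trivial that exercise is, here is a minimal Python sketch; reading "into the hundred millions" as referring to the values of the numbers themselves is my assumption, and the function name is mine, not a standard routine.

```python
def fibonacci_up_to(limit):
    """Yield Fibonacci numbers until they would exceed the given limit."""
    a, b = 0, 1
    while a <= limit:
        yield a
        a, b = b, a + b

# The largest Fibonacci number below one hundred million,
# computed in a fraction of a millisecond.
print(list(fibonacci_up_to(100_000_000))[-1])  # 63245986
```

A machine runs through the whole sequence faster than a person can say "Fibonacci", which is exactly the asymmetry the paragraph above describes.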