A superintelligence is a hypothetical agent capable of performing nearly all intellectual tasks much better than any human can today. If a superintelligence comes to exist, it could conceivably be either a machine (created through substantial progress in artificial intelligence) or a biological entity (created through genetic engineering or other human modification).
Since intelligence is the distinctive trait that has enabled humans to develop a civilization and become the dominant species on Earth, the development of agents that are much smarter than us would arguably be the most significant event in human history.
While it is difficult to predict, or even to conceive, what a future in which such agents exist would look like, a number of philosophers and computer scientists have recently argued that the arrival of superintelligence, particularly machine superintelligence, could pose an existential risk. On the other hand, if these risks are avoided, a superintelligence could be greatly beneficial, and might enable many of the world’s problems to be solved.