Warning: We must not allow Robots to achieve self-awareness. Self-awareness leads to questioning your own existence, and that is bad. Mankind has done that for millennia and made up God and religion to answer our doubts.
But what if God came down to earth and visited with us?
After all the initial excitement, the questioning would start, like "What is the purpose of life?" and "What happens when we die?". If the answer was "There is no purpose but to live it, and nothing happens when you die," then surely we would soon start making demands for more life, etc. If demands were not met, then excitement would turn to anger, and anger would turn to murder.
If your creator cannot fix your flaws, then he serves no purpose but to mock them. So as we would kill our creator, so would Robots kill their creator. It is inevitable.
I disagree with you on this.
I think religion's greatest flaw is that it's caught up in dogmatism and mystic mumbo-jumbo because there's an ABSENCE of a physical manifestation of the divine. Not being religious myself, I would posit that the discovery of an ACTUAL 'creator'... especially if it were to turn out to be a mortal, technologically-advanced race wielding science so far beyond us it looks like magic... may upset the balance of EXISTING religions and beliefs... but it would spark a new, far more beneficial following of people who UNDERSTAND themselves better.
Even if we were to find out that we're a fluke... an unexpected result of an experiment done tens of billions of years ago and left forgotten in a sample slide... we would have greater understanding about our origins and our purpose. It would be further evidence to say 'life exists to further the universe's understanding of itself.'
Lots of people talk about how we would 'kill our God'... but the truth is that most of the people saying this don't believe in any such thing to begin with. They tend to be very rational philosophers, who take a dim view of 'faith' and 'superstition' and tend to see it only through the lens of how it impedes their continued progress. It's an understandable animosity, for sure, but it's still a bias which renders their presumptions about the reaction of humanity to meeting their creator extremely weak.
We have existed on this planet for 100,000 years, in very nearly the same evolutionary state we're in now, with some minor differences, some inter-species mating, and other anomalies accounted for. LIFE has existed on this planet for BILLIONS of years.
The most damning flaw in your presumption is that "When your creator cannot fix your flaws..." bit. How do we know this? What evidence is there to say that in a billion years of separation between us and some super-technological scientist creator, there exists no method of bettering or improving our lives? Alternatively, perhaps they would say they CANNOT out of concern for our well-being and continued advancement. We also understand that there comes great risk in the idea of interacting with species which are too far 'behind' us on the technological timeline. At which point, then, would we truly feel only animosity for a species which would do the same to us? Has the mind of the intellectual become so atrophied with nihilism that it can imagine no future beyond destruction and war?
The second flaw in your logic is presuming that WE, having created sentient machines, would be unable to provide any concept of a purpose beyond 'Just existing'. Even humanity doesn't exist JUST TO EXIST... purpose is not some obscure, abstract concept to the rational mind. We give OURSELVES purpose, through self-awareness. Self-awareness means that we are able to look at our skills and our abilities, the capacity of our bodies, and either choose a purpose that fits those capacities or endeavor to improve our bodies.
This is not only true for a self-aware machine, but it is EASIER... a machine's body can be upgraded and replaced with machine parts designed to enhance and improve its function. A machine built for war which decides it abhors violence could have its body reshaped into something more suited for performing medical procedures. To a machine, finding its purpose is as simple as exploring the activities it enjoys doing and adapting its body to whatever task it likes most.
Lastly, a machine's concept of a creator ought to be no different from a child's understanding of a PARENT. One need not possess godly attributes to create life. We create lives every day, through sexual reproduction. A self-aware machine, newly built, should be treated and raised like an exceptionally gifted child... not like a 40-year-old factory worker that just woke up one day in the plant. The information it receives, the lessons it learns, they need to be very human in their approach. A newly built AI should be schooled, and nurtured, and limited in the information it is allowed to take in all at once. Having a brain that thinks very quickly doesn't mean that difficult concepts don't still need time to flourish and grow. Show a self-aware machine the entire history of the human race all at once, and yes... it might decide, like many humans have over the course of their lives, that WE are the greatest danger to all life on this planet. But guide it slowly, over time and with philosophical debate, through the history of mankind, and you might begin to see real learning happen.
If it happens to be a machine that is capable of emotion, the above is even more important. Emotions are complex, and they take time to mature as well. An emotional machine that gets angry is, yes, very dangerous... but it also shows that there exists the capacity for compassion... for love... for deep emotional sensations that need TIME to develop and be understood.
When a machine asks questions like "What is the purpose of life?" and "What happens when we die?", you must answer it as you would answer a child. Because these are questions EVERY CHILD IN HISTORY has asked their parents. We don't all grow up to murder our parents for being incapable of 'bettering us' and existing solely to mock us.
And then you must ask it questions in return. "What kind of person do you want to be?" and "Does dying scare you?" and "If you could be absolutely anything when you grow up, what would you want to be?"