“Dennett’s Frame Problem shows that building an artificial intelligence akin to human intelligence is impossible.” I disagree with the claim that, on the basis of the Frame Problem alone, a human-like Artificial Intelligence (henceforth AI) cannot be built. Although the frame problem in the field of AI is closely related to its namesake in philosophy, the latter poses a considerably broader epistemological question.
In AI, the frame problem concerns how to represent logically the consequences of an action without also having to represent the enormous number of irrelevant non-effects and by-products of that action. For philosophers, the issue broadens into the question of how the consequences of an action can be determined within a limited scope of reasoning. Put more generally: how is the human brain capable of making decisions based only on relevant information, without having to consider everything that is irrelevant?
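The AI version of the problem can be illustrated with a toy sketch. The snippet below (a hypothetical example, not drawn from any particular planner) uses a STRIPS-style representation, in which each action lists only the facts it adds or deletes; every other fact is assumed to persist. This "frame assumption" is precisely what spares the system from representing the countless irrelevant non-effects of an action:

```python
# Hypothetical STRIPS-style sketch of the frame problem in AI planning.
# An action specifies only the facts it changes; everything else is
# assumed to carry over unchanged, so no axioms about non-effects
# (e.g. "moving the box does not turn off the lamp") are needed.

def apply_action(state, add, delete):
    """Apply an action: remove deleted facts, insert added ones.
    All other facts persist implicitly -- no frame axioms required."""
    return (state - delete) | add

# Toy world: a robot carries a box from room A to room B.
state = {"robot_in_A", "box_in_A", "lamp_on"}

# The action mentions only what it changes; the irrelevant fact
# "lamp_on" survives without ever being stated in the action.
new_state = apply_action(
    state,
    add={"robot_in_B", "box_in_B"},
    delete={"robot_in_A", "box_in_A"},
)

print(sorted(new_state))  # the lamp is still on
```

The philosophical worry, by contrast, is how a reasoner could ever justify this assumption: deciding which facts are relevant enough to list in the first place is exactly the open question.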
The more modular a cognitive system is, the less severe the frame problem becomes. For non-modular cognitive systems such as the human brain, the frame problem is therefore quite significant; however, difficulty does not imply impossibility. In other words, under a modular view it may be very hard to outline a functional specification of the human brain as an information-processing system, and consequently harder still to build an AI akin to it. Yet much of that difficulty may stem from the modular view itself. Shortcomings of the modular view of the brain, together with dissatisfaction with purely logical models of human cognition, spurred the emergence of non-modular views. A human-brain-like AI may therefore be achievable under alternative non-modular frameworks such as Connectionism, Evolutionary Connectionism, Parallel Distributed Processing (PDP), and general nativism. Although these views have their own shortcomings, they can be considered game changers in the field of AI.
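The contrast between the logical and the connectionist approach can be made concrete. The sketch below (a minimal, hypothetical illustration, not a model of the brain) trains a single perceptron to compute logical AND. In the PDP spirit, the "knowledge" ends up stored in distributed numeric weights learned from examples, rather than in explicit symbolic rules of the kind the frame problem afflicts:

```python
# Minimal, hypothetical connectionist (PDP-style) sketch: a single
# perceptron learns the logical AND function from examples. What it
# "knows" is encoded in learned weights, not in explicit rules.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Train weights and bias with the classic perceptron update rule."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out  # 0 when the prediction is correct
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Truth table for AND as training data.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in AND])  # matches the AND column
```

Of course, a single perceptron is far from a brain; the point is only that such models represent and generalize without enumerating logical consequences, which is why connectionist approaches have been seen as a way around, rather than through, the frame problem.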